Test Report: QEMU_macOS 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-19:35416

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.41
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.85
36 TestAddons/Setup 10.05
37 TestCertOptions 10.06
38 TestCertExpiration 195.25
39 TestDockerFlags 10.08
40 TestForceSystemdFlag 9.95
41 TestForceSystemdEnv 10.05
47 TestErrorSpam/setup 9.92
56 TestFunctional/serial/StartWithProxy 9.85
58 TestFunctional/serial/SoftStart 5.25
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.98
72 TestFunctional/serial/ExtraConfig 5.25
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.12
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.11
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.28
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.03
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.04
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.04
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.75
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.31
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.27
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.12
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 36.7
150 TestMultiControlPlane/serial/StartCluster 9.93
151 TestMultiControlPlane/serial/DeployApp 104.66
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.07
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 46.51
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 8.03
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 2.18
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.89
174 TestJSONOutput/start/Command 9.92
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.04
203 TestMinikubeProfile 10.04
206 TestMountStart/serial/StartWithMountFirst 9.96
209 TestMultiNode/serial/FreshStart2Nodes 9.83
210 TestMultiNode/serial/DeployApp2Nodes 97.41
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 42.34
218 TestMultiNode/serial/RestartKeepsNodes 9.42
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.51
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 19.97
226 TestPreload 9.93
228 TestScheduledStopUnix 9.93
229 TestSkaffold 12.47
232 TestRunningBinaryUpgrade 585.94
234 TestKubernetesUpgrade 16.93
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.11
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.02
250 TestStoppedBinaryUpgrade/Upgrade 577.78
252 TestPause/serial/Start 9.82
262 TestNoKubernetes/serial/StartWithK8s 9.75
263 TestNoKubernetes/serial/StartWithStopK8s 5.29
264 TestNoKubernetes/serial/Start 5.29
268 TestNoKubernetes/serial/StartNoArgs 5.32
270 TestNetworkPlugins/group/auto/Start 9.8
271 TestNetworkPlugins/group/flannel/Start 9.77
272 TestNetworkPlugins/group/enable-default-cni/Start 9.87
273 TestNetworkPlugins/group/bridge/Start 9.77
274 TestNetworkPlugins/group/kubenet/Start 9.83
275 TestNetworkPlugins/group/kindnet/Start 9.83
276 TestNetworkPlugins/group/calico/Start 9.8
277 TestNetworkPlugins/group/custom-flannel/Start 9.88
278 TestNetworkPlugins/group/false/Start 9.85
280 TestStartStop/group/old-k8s-version/serial/FirstStart 9.79
281 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
282 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 9.95
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.15
297 TestStartStop/group/no-preload/serial/SecondStart 5.25
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 11.57
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.83
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
310 TestStartStop/group/embed-certs/serial/SecondStart 5.96
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/FirstStart 9.93
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (20.41s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-549000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-549000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (20.404328958s)

-- stdout --
	{"specversion":"1.0","id":"59c67fba-20e5-4a3d-a64b-b6e5d1daebd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-549000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0fc50e3e-d558-447d-bc9c-d638e22167b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"4bb375ca-4eb9-4bf1-af96-7f055500509f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig"}}
	{"specversion":"1.0","id":"d41029e8-a41f-433b-b2f5-fa3e4640b326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5bdf7626-dfb6-4a8e-b0d9-28e6977c62a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d8963c5-0003-4d21-ae3f-bad0ccce4405","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube"}}
	{"specversion":"1.0","id":"87481ddb-6e42-42a5-8089-8228bc119614","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"c40e1c74-dbc1-41a1-b5e4-dce7ecc6e492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c68c5687-d375-451a-b1f7-8b772c84407e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2ccf79b6-a7d8-4709-8370-ae4870ea3b15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fec57fbc-0a71-4846-a23a-f3adc4538ad8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-549000\" primary control-plane node in \"download-only-549000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b84fde7b-13c4-426a-bead-249e4edfb9ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dc8f840-b2cb-4d43-b92d-f8fc4bbfc739","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60] Decompressors:map[bz2:0x14000704270 gz:0x14000704278 tar:0x14000704220 tar.bz2:0x14000704230 tar.gz:0x14000704240 tar.xz:0x14000704250 tar.zst:0x14000704260 tbz2:0x14000704230 tgz:0x14000704240 txz:0x14000704250 tzst:0x14000704260 xz:0x14000704280 zip:0x14000704290 zst:0x14000704288] Getters:map[file:0x140008887d0 http:0x1400087a410 https:0x1400087a460] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"4a503816-c61c-4201-b9c1-092aa13eb329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0719 07:19:30.806428    6475 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:19:30.806578    6475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:30.806581    6475 out.go:304] Setting ErrFile to fd 2...
	I0719 07:19:30.806584    6475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:30.806701    6475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	W0719 07:19:30.806776    6475 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19302-5980/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19302-5980/.minikube/config/config.json: no such file or directory
	I0719 07:19:30.808184    6475 out.go:298] Setting JSON to true
	I0719 07:19:30.825868    6475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4739,"bootTime":1721394031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:19:30.825941    6475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:19:30.830661    6475 out.go:97] [download-only-549000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:19:30.830846    6475 notify.go:220] Checking for updates...
	W0719 07:19:30.830895    6475 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 07:19:30.835623    6475 out.go:169] MINIKUBE_LOCATION=19302
	I0719 07:19:30.839228    6475 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:19:30.845238    6475 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:19:30.848236    6475 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:19:30.851489    6475 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	W0719 07:19:30.859677    6475 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 07:19:30.859943    6475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:19:30.863469    6475 out.go:97] Using the qemu2 driver based on user configuration
	I0719 07:19:30.863497    6475 start.go:297] selected driver: qemu2
	I0719 07:19:30.863513    6475 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:19:30.863588    6475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:19:30.865677    6475 out.go:169] Automatically selected the socket_vmnet network
	I0719 07:19:30.871524    6475 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 07:19:30.871619    6475 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:19:30.871690    6475 cni.go:84] Creating CNI manager for ""
	I0719 07:19:30.871707    6475 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 07:19:30.871788    6475 start.go:340] cluster config:
	{Name:download-only-549000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:19:30.875475    6475 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:19:30.880101    6475 out.go:97] Downloading VM boot image ...
	I0719 07:19:30.880128    6475 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0719 07:19:37.351861    6475 out.go:97] Starting "download-only-549000" primary control-plane node in "download-only-549000" cluster
	I0719 07:19:37.351911    6475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:37.411824    6475 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 07:19:37.411844    6475 cache.go:56] Caching tarball of preloaded images
	I0719 07:19:37.412031    6475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:37.416221    6475 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 07:19:37.416228    6475 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:37.501051    6475 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 07:19:50.086542    6475 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:50.086707    6475 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:50.782351    6475 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 07:19:50.782564    6475 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-549000/config.json ...
	I0719 07:19:50.782597    6475 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-549000/config.json: {Name:mk80651b58e82497ca1ac3cf10697acde6242843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:19:50.782839    6475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:50.783695    6475 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0719 07:19:51.131974    6475 out.go:169] 
	W0719 07:19:51.135951    6475 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60] Decompressors:map[bz2:0x14000704270 gz:0x14000704278 tar:0x14000704220 tar.bz2:0x14000704230 tar.gz:0x14000704240 tar.xz:0x14000704250 tar.zst:0x14000704260 tbz2:0x14000704230 tgz:0x14000704240 txz:0x14000704250 tzst:0x14000704260 xz:0x14000704280 zip:0x14000704290 zst:0x14000704288] Getters:map[file:0x140008887d0 http:0x1400087a410 https:0x1400087a460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0719 07:19:51.135976    6475 out_reason.go:110] 
	W0719 07:19:51.143855    6475 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:19:51.147873    6475 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-549000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (20.41s)
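
The root cause is the go-getter error quoted above: fetching the kubectl checksum file for v1.20.0 on darwin/arm64 returns HTTP 404, so the kubectl download is aborted and minikube exits with status 40 (v1.20.0 predates published darwin/arm64 kubectl builds, hence the missing file). A minimal stand-alone Go sketch, not part of the test suite, that reproduces the 404 against the checksum URL taken from the error message (assumes network access):

	// repro_checksum_404.go: HEAD the checksum URL that go-getter tried to fetch.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		// Expected here: "404 Not Found", which is what makes go-getter abort
		// the kubectl download with "bad response code: 404".
		fmt.Println(url, "->", resp.Status)
	}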

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
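
This sub-test only asserts that the kubectl binary cached by the previous step exists on disk; since that download failed, the stat fails too. A rough stand-alone equivalent of the check (path copied verbatim from the test output; the MINIKUBE_HOME prefix is specific to this Jenkins agent):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache location the download-only test expects to find populated.
		path := "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("FAIL:", err) // here: "no such file or directory"
			return
		}
		fmt.Println("OK: cached kubectl exists")
	}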

TestOffline (9.85s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-543000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-543000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.702088875s)

-- stdout --
	* [offline-docker-543000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-543000" primary control-plane node in "offline-docker-543000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-543000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:31:32.016302    8148 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:31:32.016432    8148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:32.016434    8148 out.go:304] Setting ErrFile to fd 2...
	I0719 07:31:32.016437    8148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:32.016549    8148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:31:32.017738    8148 out.go:298] Setting JSON to false
	I0719 07:31:32.034830    8148 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5461,"bootTime":1721394031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:31:32.034897    8148 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:31:32.039704    8148 out.go:177] * [offline-docker-543000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:31:32.046780    8148 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:31:32.046786    8148 notify.go:220] Checking for updates...
	I0719 07:31:32.052672    8148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:31:32.055655    8148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:31:32.058732    8148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:31:32.061720    8148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:31:32.064616    8148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:31:32.068025    8148 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:31:32.068085    8148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:31:32.071717    8148 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:31:32.078699    8148 start.go:297] selected driver: qemu2
	I0719 07:31:32.078710    8148 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:31:32.078717    8148 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:31:32.080604    8148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:31:32.083659    8148 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:31:32.086709    8148 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:31:32.086744    8148 cni.go:84] Creating CNI manager for ""
	I0719 07:31:32.086753    8148 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:31:32.086756    8148 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:31:32.086799    8148 start.go:340] cluster config:
	{Name:offline-docker-543000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:31:32.090441    8148 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:31:32.097686    8148 out.go:177] * Starting "offline-docker-543000" primary control-plane node in "offline-docker-543000" cluster
	I0719 07:31:32.101679    8148 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:31:32.101718    8148 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:31:32.101727    8148 cache.go:56] Caching tarball of preloaded images
	I0719 07:31:32.101804    8148 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:31:32.101809    8148 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:31:32.101875    8148 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/offline-docker-543000/config.json ...
	I0719 07:31:32.101886    8148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/offline-docker-543000/config.json: {Name:mk780733d8cb68e6d87c83581bb86da903b89353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:31:32.102152    8148 start.go:360] acquireMachinesLock for offline-docker-543000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:32.102184    8148 start.go:364] duration metric: took 25.916µs to acquireMachinesLock for "offline-docker-543000"
	I0719 07:31:32.102194    8148 start.go:93] Provisioning new machine with config: &{Name:offline-docker-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:32.102232    8148 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:32.106688    8148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:32.122517    8148 start.go:159] libmachine.API.Create for "offline-docker-543000" (driver="qemu2")
	I0719 07:31:32.122547    8148 client.go:168] LocalClient.Create starting
	I0719 07:31:32.122612    8148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:32.122644    8148 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:32.122653    8148 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:32.122701    8148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:32.122723    8148 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:32.122734    8148 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:32.123144    8148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:32.247905    8148 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:32.353889    8148 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:32.353897    8148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:32.354062    8148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2
	I0719 07:31:32.363408    8148 main.go:141] libmachine: STDOUT: 
	I0719 07:31:32.363436    8148 main.go:141] libmachine: STDERR: 
	I0719 07:31:32.363505    8148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2 +20000M
	I0719 07:31:32.372392    8148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:32.372413    8148 main.go:141] libmachine: STDERR: 
	I0719 07:31:32.372434    8148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2
	I0719 07:31:32.372439    8148 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:32.372456    8148 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:32.372482    8148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:59:8d:9d:5c:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2
	I0719 07:31:32.375321    8148 main.go:141] libmachine: STDOUT: 
	I0719 07:31:32.375342    8148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:32.375362    8148 client.go:171] duration metric: took 252.811375ms to LocalClient.Create
	I0719 07:31:34.377428    8148 start.go:128] duration metric: took 2.275203041s to createHost
	I0719 07:31:34.377457    8148 start.go:83] releasing machines lock for "offline-docker-543000", held for 2.275283959s
	W0719 07:31:34.377474    8148 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:34.382687    8148 out.go:177] * Deleting "offline-docker-543000" in qemu2 ...
	W0719 07:31:34.397652    8148 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:34.397664    8148 start.go:729] Will try again in 5 seconds ...
	I0719 07:31:39.399869    8148 start.go:360] acquireMachinesLock for offline-docker-543000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:39.400320    8148 start.go:364] duration metric: took 350.5µs to acquireMachinesLock for "offline-docker-543000"
	I0719 07:31:39.400432    8148 start.go:93] Provisioning new machine with config: &{Name:offline-docker-543000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:39.400689    8148 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:39.410254    8148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:39.461323    8148 start.go:159] libmachine.API.Create for "offline-docker-543000" (driver="qemu2")
	I0719 07:31:39.461376    8148 client.go:168] LocalClient.Create starting
	I0719 07:31:39.461491    8148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:39.461560    8148 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:39.461578    8148 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:39.461653    8148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:39.461700    8148 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:39.461713    8148 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:39.462215    8148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:39.589486    8148 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:39.629903    8148 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:39.629909    8148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:39.630099    8148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2
	I0719 07:31:39.639155    8148 main.go:141] libmachine: STDOUT: 
	I0719 07:31:39.639173    8148 main.go:141] libmachine: STDERR: 
	I0719 07:31:39.639217    8148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2 +20000M
	I0719 07:31:39.647167    8148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:39.647182    8148 main.go:141] libmachine: STDERR: 
	I0719 07:31:39.647191    8148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2
	I0719 07:31:39.647197    8148 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:39.647207    8148 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:39.647242    8148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:92:3d:02:a9:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/offline-docker-543000/disk.qcow2
	I0719 07:31:39.648846    8148 main.go:141] libmachine: STDOUT: 
	I0719 07:31:39.648863    8148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:39.648876    8148 client.go:171] duration metric: took 187.495917ms to LocalClient.Create
	I0719 07:31:41.651046    8148 start.go:128] duration metric: took 2.250319708s to createHost
	I0719 07:31:41.651107    8148 start.go:83] releasing machines lock for "offline-docker-543000", held for 2.25077s
	W0719 07:31:41.651532    8148 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-543000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-543000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:41.659305    8148 out.go:177] 
	W0719 07:31:41.663271    8148 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:31:41.663298    8148 out.go:239] * 
	* 
	W0719 07:31:41.666317    8148 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:31:41.675185    8148 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-543000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-19 07:31:41.691335 -0700 PDT m=+730.967078834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-543000 -n offline-docker-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-543000 -n offline-docker-543000: exit status 7 (66.234875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-543000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-543000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-543000
--- FAIL: TestOffline (9.85s)
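
Every VM creation in this run fails the same way: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so host provisioning aborts with GUEST_PROVISION. A small diagnostic sketch, not part of the suite, that checks whether the daemon is accepting connections (socket path taken from SocketVMnetPath in the cluster config above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client connects to.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" (socket present, nothing listening) or
			// "no such file or directory" both mean socket_vmnet is not serving,
			// matching the ERROR lines in the stdout above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}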

TestAddons/Setup (10.05s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-047000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-047000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.051949042s)

-- stdout --
	* [addons-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-047000" primary control-plane node in "addons-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:20:22.928295    6582 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:20:22.928417    6582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:22.928421    6582 out.go:304] Setting ErrFile to fd 2...
	I0719 07:20:22.928423    6582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:22.928559    6582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:20:22.929558    6582 out.go:298] Setting JSON to false
	I0719 07:20:22.945744    6582 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4791,"bootTime":1721394031,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:20:22.945802    6582 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:20:22.950068    6582 out.go:177] * [addons-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:20:22.953059    6582 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:20:22.953161    6582 notify.go:220] Checking for updates...
	I0719 07:20:22.958000    6582 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:20:22.960987    6582 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:20:22.962294    6582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:20:22.965015    6582 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:20:22.968005    6582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:20:22.971233    6582 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:20:22.974934    6582 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:20:22.982016    6582 start.go:297] selected driver: qemu2
	I0719 07:20:22.982023    6582 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:20:22.982034    6582 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:20:22.984118    6582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:20:22.986955    6582 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:20:22.990062    6582 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:20:22.990084    6582 cni.go:84] Creating CNI manager for ""
	I0719 07:20:22.990090    6582 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:20:22.990095    6582 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:20:22.990125    6582 start.go:340] cluster config:
	{Name:addons-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:20:22.993642    6582 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:20:23.000989    6582 out.go:177] * Starting "addons-047000" primary control-plane node in "addons-047000" cluster
	I0719 07:20:23.005033    6582 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:20:23.005049    6582 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:20:23.005067    6582 cache.go:56] Caching tarball of preloaded images
	I0719 07:20:23.005134    6582 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:20:23.005140    6582 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:20:23.005357    6582 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/addons-047000/config.json ...
	I0719 07:20:23.005373    6582 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/addons-047000/config.json: {Name:mk4abde1f627e112c3f8285e598326e3c82ca9a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:20:23.005701    6582 start.go:360] acquireMachinesLock for addons-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:20:23.005765    6582 start.go:364] duration metric: took 58.625µs to acquireMachinesLock for "addons-047000"
	I0719 07:20:23.005775    6582 start.go:93] Provisioning new machine with config: &{Name:addons-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:20:23.005802    6582 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:20:23.013973    6582 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 07:20:23.032867    6582 start.go:159] libmachine.API.Create for "addons-047000" (driver="qemu2")
	I0719 07:20:23.032904    6582 client.go:168] LocalClient.Create starting
	I0719 07:20:23.033045    6582 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:20:23.158613    6582 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:20:23.218895    6582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:20:23.391958    6582 main.go:141] libmachine: Creating SSH key...
	I0719 07:20:23.507176    6582 main.go:141] libmachine: Creating Disk image...
	I0719 07:20:23.507181    6582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:20:23.507366    6582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2
	I0719 07:20:23.516624    6582 main.go:141] libmachine: STDOUT: 
	I0719 07:20:23.516640    6582 main.go:141] libmachine: STDERR: 
	I0719 07:20:23.516687    6582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2 +20000M
	I0719 07:20:23.524442    6582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:20:23.524460    6582 main.go:141] libmachine: STDERR: 
	I0719 07:20:23.524474    6582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2
	I0719 07:20:23.524477    6582 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:20:23.524502    6582 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:20:23.524539    6582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:9f:bd:3c:99:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2
	I0719 07:20:23.526205    6582 main.go:141] libmachine: STDOUT: 
	I0719 07:20:23.526223    6582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:20:23.526241    6582 client.go:171] duration metric: took 493.335583ms to LocalClient.Create
	I0719 07:20:25.528449    6582 start.go:128] duration metric: took 2.522632625s to createHost
	I0719 07:20:25.528518    6582 start.go:83] releasing machines lock for "addons-047000", held for 2.522760333s
	W0719 07:20:25.528585    6582 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:20:25.541922    6582 out.go:177] * Deleting "addons-047000" in qemu2 ...
	W0719 07:20:25.565111    6582 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:20:25.565156    6582 start.go:729] Will try again in 5 seconds ...
	I0719 07:20:30.567407    6582 start.go:360] acquireMachinesLock for addons-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:20:30.567971    6582 start.go:364] duration metric: took 447.875µs to acquireMachinesLock for "addons-047000"
	I0719 07:20:30.568117    6582 start.go:93] Provisioning new machine with config: &{Name:addons-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:20:30.568375    6582 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:20:30.579088    6582 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 07:20:30.629473    6582 start.go:159] libmachine.API.Create for "addons-047000" (driver="qemu2")
	I0719 07:20:30.629540    6582 client.go:168] LocalClient.Create starting
	I0719 07:20:30.629661    6582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:20:30.629725    6582 main.go:141] libmachine: Decoding PEM data...
	I0719 07:20:30.629742    6582 main.go:141] libmachine: Parsing certificate...
	I0719 07:20:30.629842    6582 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:20:30.629899    6582 main.go:141] libmachine: Decoding PEM data...
	I0719 07:20:30.629910    6582 main.go:141] libmachine: Parsing certificate...
	I0719 07:20:30.630496    6582 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:20:30.757719    6582 main.go:141] libmachine: Creating SSH key...
	I0719 07:20:30.895056    6582 main.go:141] libmachine: Creating Disk image...
	I0719 07:20:30.895070    6582 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:20:30.895259    6582 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2
	I0719 07:20:30.904352    6582 main.go:141] libmachine: STDOUT: 
	I0719 07:20:30.904376    6582 main.go:141] libmachine: STDERR: 
	I0719 07:20:30.904444    6582 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2 +20000M
	I0719 07:20:30.912189    6582 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:20:30.912208    6582 main.go:141] libmachine: STDERR: 
	I0719 07:20:30.912222    6582 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2
	I0719 07:20:30.912229    6582 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:20:30.912242    6582 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:20:30.912273    6582 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:a7:c2:70:d3:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/addons-047000/disk.qcow2
	I0719 07:20:30.913951    6582 main.go:141] libmachine: STDOUT: 
	I0719 07:20:30.913970    6582 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:20:30.913984    6582 client.go:171] duration metric: took 284.439125ms to LocalClient.Create
	I0719 07:20:32.916152    6582 start.go:128] duration metric: took 2.347722041s to createHost
	I0719 07:20:32.916223    6582 start.go:83] releasing machines lock for "addons-047000", held for 2.348241125s
	W0719 07:20:32.916667    6582 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:20:32.925223    6582 out.go:177] 
	W0719 07:20:32.930372    6582 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:20:32.930425    6582 out.go:239] * 
	* 
	W0719 07:20:32.933051    6582 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:20:32.939327    6582 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-047000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.05s)
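Note: the captured qemu invocation (the libmachine "executing:" line above) shows the launch mechanism: socket_vmnet_client connects to the daemon's socket, then execs QEMU with the vmnet connection passed as a file descriptor (-netdev socket,id=net0,fd=3). Because the client is a plain wrapper, the failure should be reproducible without minikube by substituting a trivial command for QEMU — a hypothetical check, reusing the binary path the log itself shows:

	# Prints the same 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# when the daemon is down; when healthy, it execs `true` with the vmnet fd and exits 0.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true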

TestCertOptions (10.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-480000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-480000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.803019167s)

-- stdout --
	* [cert-options-480000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-480000" primary control-plane node in "cert-options-480000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-480000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-480000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-480000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-480000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-480000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.68125ms)

-- stdout --
	* The control-plane node cert-options-480000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-480000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-480000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
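Note: the four missing-SAN assertions are a cascade from the failed start, not independent certificate bugs — exit status 83 above means the ssh command never reached a running VM, so the test matched its expected names against an empty certificate dump. On a cluster that did start, the same command from cert_options_test.go:60 would expose the SAN list directly; a hypothetical spot-check:

	# -A1 prints the line after the SAN header, i.e. the DNS:/IP: entries themselves
	out/minikube-darwin-arm64 -p cert-options-480000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"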
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-480000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-480000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-480000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.031625ms)

-- stdout --
	* The control-plane node cert-options-480000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-480000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-480000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-480000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-480000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-19 07:32:11.928877 -0700 PDT m=+761.204824209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-480000 -n cert-options-480000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-480000 -n cert-options-480000: exit status 7 (30.124333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-480000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-480000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-480000
--- FAIL: TestCertOptions (10.06s)

TestCertExpiration (195.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-134000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-134000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.878188667s)

-- stdout --
	* [cert-expiration-134000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-134000" primary control-plane node in "cert-expiration-134000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-134000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-134000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-134000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-134000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-134000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.225343625s)

-- stdout --
	* [cert-expiration-134000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-134000" primary control-plane node in "cert-expiration-134000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-134000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-134000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-134000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-134000" primary control-plane node in "cert-expiration-134000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-134000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-134000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-19 07:35:12.002306 -0700 PDT m=+941.279466376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-134000 -n cert-expiration-134000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-134000 -n cert-expiration-134000: exit status 7 (65.60475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-134000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-134000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-134000
--- FAIL: TestCertExpiration (195.25s)
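Note: the 195.25s duration is mostly waiting, not retrying. It breaks down as roughly 9.9s for the first failed start, a 3-minute pause while the test lets the --cert-expiration=3m certificates lapse (the pause appears to run whether or not the first start succeeded), and 5.2s for the second failed start: 9.9 + 180 + 5.2 ≈ 195.1s, consistent with the reported total once cleanup commands are added.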

TestDockerFlags (10.08s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-033000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-033000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.8586245s)

-- stdout --
	* [docker-flags-033000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-033000" primary control-plane node in "docker-flags-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:31:51.912690    8342 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:31:51.912803    8342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:51.912806    8342 out.go:304] Setting ErrFile to fd 2...
	I0719 07:31:51.912809    8342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:51.912923    8342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:31:51.913979    8342 out.go:298] Setting JSON to false
	I0719 07:31:51.930138    8342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5480,"bootTime":1721394031,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:31:51.930208    8342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:31:51.935979    8342 out.go:177] * [docker-flags-033000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:31:51.942944    8342 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:31:51.942996    8342 notify.go:220] Checking for updates...
	I0719 07:31:51.949991    8342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:31:51.953000    8342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:31:51.955974    8342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:31:51.958916    8342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:31:51.961992    8342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:31:51.965253    8342 config.go:182] Loaded profile config "force-systemd-flag-710000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:31:51.965324    8342 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:31:51.965385    8342 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:31:51.968882    8342 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:31:51.974961    8342 start.go:297] selected driver: qemu2
	I0719 07:31:51.974967    8342 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:31:51.974973    8342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:31:51.977313    8342 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:31:51.981920    8342 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:31:51.985016    8342 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0719 07:31:51.985046    8342 cni.go:84] Creating CNI manager for ""
	I0719 07:31:51.985054    8342 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:31:51.985060    8342 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:31:51.985092    8342 start.go:340] cluster config:
	{Name:docker-flags-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:31:51.989015    8342 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:31:51.995950    8342 out.go:177] * Starting "docker-flags-033000" primary control-plane node in "docker-flags-033000" cluster
	I0719 07:31:51.999803    8342 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:31:51.999821    8342 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:31:51.999834    8342 cache.go:56] Caching tarball of preloaded images
	I0719 07:31:51.999917    8342 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:31:51.999924    8342 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:31:51.999994    8342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/docker-flags-033000/config.json ...
	I0719 07:31:52.000007    8342 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/docker-flags-033000/config.json: {Name:mk5a7027974700cfb5575156aa56876e5fdc921b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:31:52.000231    8342 start.go:360] acquireMachinesLock for docker-flags-033000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:52.000271    8342 start.go:364] duration metric: took 30.709µs to acquireMachinesLock for "docker-flags-033000"
	I0719 07:31:52.000283    8342 start.go:93] Provisioning new machine with config: &{Name:docker-flags-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:52.000311    8342 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:52.003977    8342 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:52.021835    8342 start.go:159] libmachine.API.Create for "docker-flags-033000" (driver="qemu2")
	I0719 07:31:52.021874    8342 client.go:168] LocalClient.Create starting
	I0719 07:31:52.021929    8342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:52.021959    8342 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:52.021968    8342 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:52.022009    8342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:52.022033    8342 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:52.022041    8342 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:52.022378    8342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:52.139542    8342 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:52.276089    8342 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:52.276096    8342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:52.276303    8342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2
	I0719 07:31:52.285597    8342 main.go:141] libmachine: STDOUT: 
	I0719 07:31:52.285614    8342 main.go:141] libmachine: STDERR: 
	I0719 07:31:52.285652    8342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2 +20000M
	I0719 07:31:52.293478    8342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:52.293492    8342 main.go:141] libmachine: STDERR: 
	I0719 07:31:52.293513    8342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2
	I0719 07:31:52.293517    8342 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:52.293529    8342 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:52.293554    8342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:72:66:3b:7b:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2
	I0719 07:31:52.295215    8342 main.go:141] libmachine: STDOUT: 
	I0719 07:31:52.295233    8342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:52.295253    8342 client.go:171] duration metric: took 273.377709ms to LocalClient.Create
	I0719 07:31:54.297411    8342 start.go:128] duration metric: took 2.29709375s to createHost
	I0719 07:31:54.297477    8342 start.go:83] releasing machines lock for "docker-flags-033000", held for 2.297211917s
	W0719 07:31:54.297524    8342 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:54.309533    8342 out.go:177] * Deleting "docker-flags-033000" in qemu2 ...
	W0719 07:31:54.330313    8342 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:54.330344    8342 start.go:729] Will try again in 5 seconds ...
	I0719 07:31:59.332466    8342 start.go:360] acquireMachinesLock for docker-flags-033000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:59.384673    8342 start.go:364] duration metric: took 52.090292ms to acquireMachinesLock for "docker-flags-033000"
	I0719 07:31:59.384781    8342 start.go:93] Provisioning new machine with config: &{Name:docker-flags-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:59.385050    8342 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:59.397511    8342 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:59.446135    8342 start.go:159] libmachine.API.Create for "docker-flags-033000" (driver="qemu2")
	I0719 07:31:59.446177    8342 client.go:168] LocalClient.Create starting
	I0719 07:31:59.446300    8342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:59.446369    8342 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:59.446386    8342 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:59.446450    8342 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:59.446496    8342 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:59.446509    8342 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:59.446985    8342 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:59.576954    8342 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:59.680290    8342 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:59.680295    8342 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:59.680484    8342 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2
	I0719 07:31:59.689779    8342 main.go:141] libmachine: STDOUT: 
	I0719 07:31:59.689807    8342 main.go:141] libmachine: STDERR: 
	I0719 07:31:59.689855    8342 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2 +20000M
	I0719 07:31:59.697642    8342 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:59.697655    8342 main.go:141] libmachine: STDERR: 
	I0719 07:31:59.697666    8342 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2
	I0719 07:31:59.697670    8342 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:59.697684    8342 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:59.697719    8342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:ae:05:f5:4b:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/docker-flags-033000/disk.qcow2
	I0719 07:31:59.699344    8342 main.go:141] libmachine: STDOUT: 
	I0719 07:31:59.699357    8342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:59.699367    8342 client.go:171] duration metric: took 253.186792ms to LocalClient.Create
	I0719 07:32:01.701521    8342 start.go:128] duration metric: took 2.316456708s to createHost
	I0719 07:32:01.701592    8342 start.go:83] releasing machines lock for "docker-flags-033000", held for 2.316881583s
	W0719 07:32:01.701904    8342 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:32:01.713559    8342 out.go:177] 
	W0719 07:32:01.718430    8342 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:32:01.718493    8342 out.go:239] * 
	* 
	W0719 07:32:01.721211    8342 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:32:01.728421    8342 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-033000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-033000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-033000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.801875ms)

-- stdout --
	* The control-plane node docker-flags-033000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-033000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-033000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-033000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-033000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-033000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-033000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-033000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-033000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.501917ms)

-- stdout --
	* The control-plane node docker-flags-033000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-033000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-033000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-033000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-033000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-033000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-19 07:32:01.86936 -0700 PDT m=+751.145239793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-033000 -n docker-flags-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-033000 -n docker-flags-033000: exit status 7 (28.933ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-033000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-033000
--- FAIL: TestDockerFlags (10.08s)

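Every failure in this batch has the same root cause: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet, so no VM ever boots and every later ssh/status step runs against a Stopped host. Below is a minimal Go sketch for checking that precondition directly on the CI host; the file name, the 2-second timeout, and the messages are illustrative choices, not part of the minikube test suite.

	// socketprobe.go - hypothetical diagnostic helper, not minikube code.
	// Dials the unix socket the qemu2 driver needs before it can start a VM.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath taken from the cluster config dumps in this report.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same condition the driver hits: nothing is listening on the
			// socket, so the socket_vmnet daemon is most likely not running.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way, the socket_vmnet daemon (normally run as root, e.g. via launchd) is not listening at the configured SocketVMnetPath, and restarting it should clear the whole run of GUEST_PROVISION failures.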
TestForceSystemdFlag (9.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-710000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-710000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.772789167s)

-- stdout --
	* [force-systemd-flag-710000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-710000" primary control-plane node in "force-systemd-flag-710000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0719 07:31:46.970122    8318 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:31:46.970256    8318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:46.970259    8318 out.go:304] Setting ErrFile to fd 2...
	I0719 07:31:46.970261    8318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:46.970374    8318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:31:46.971426    8318 out.go:298] Setting JSON to false
	I0719 07:31:46.987470    8318 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5475,"bootTime":1721394031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:31:46.987532    8318 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:31:46.992391    8318 out.go:177] * [force-systemd-flag-710000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:31:46.999313    8318 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:31:46.999368    8318 notify.go:220] Checking for updates...
	I0719 07:31:47.005274    8318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:31:47.008331    8318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:31:47.011296    8318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:31:47.014280    8318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:31:47.017383    8318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:31:47.020533    8318 config.go:182] Loaded profile config "force-systemd-env-580000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:31:47.020604    8318 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:31:47.020642    8318 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:31:47.025263    8318 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:31:47.031252    8318 start.go:297] selected driver: qemu2
	I0719 07:31:47.031258    8318 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:31:47.031264    8318 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:31:47.033475    8318 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:31:47.036272    8318 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:31:47.039373    8318 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:31:47.039387    8318 cni.go:84] Creating CNI manager for ""
	I0719 07:31:47.039394    8318 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:31:47.039400    8318 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:31:47.039431    8318 start.go:340] cluster config:
	{Name:force-systemd-flag-710000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:31:47.043174    8318 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:31:47.050308    8318 out.go:177] * Starting "force-systemd-flag-710000" primary control-plane node in "force-systemd-flag-710000" cluster
	I0719 07:31:47.054350    8318 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:31:47.054367    8318 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:31:47.054378    8318 cache.go:56] Caching tarball of preloaded images
	I0719 07:31:47.054430    8318 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:31:47.054436    8318 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:31:47.054515    8318 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/force-systemd-flag-710000/config.json ...
	I0719 07:31:47.054527    8318 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/force-systemd-flag-710000/config.json: {Name:mk1481b047d31f9e755f8dc10792c73ec67c5514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:31:47.054741    8318 start.go:360] acquireMachinesLock for force-systemd-flag-710000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:47.054777    8318 start.go:364] duration metric: took 27.959µs to acquireMachinesLock for "force-systemd-flag-710000"
	I0719 07:31:47.054788    8318 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:47.054812    8318 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:47.063315    8318 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:47.081378    8318 start.go:159] libmachine.API.Create for "force-systemd-flag-710000" (driver="qemu2")
	I0719 07:31:47.081408    8318 client.go:168] LocalClient.Create starting
	I0719 07:31:47.081461    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:47.081492    8318 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:47.081501    8318 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:47.081540    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:47.081567    8318 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:47.081577    8318 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:47.081975    8318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:47.196974    8318 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:47.325943    8318 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:47.325949    8318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:47.326151    8318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2
	I0719 07:31:47.335819    8318 main.go:141] libmachine: STDOUT: 
	I0719 07:31:47.335842    8318 main.go:141] libmachine: STDERR: 
	I0719 07:31:47.335889    8318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2 +20000M
	I0719 07:31:47.343642    8318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:47.343656    8318 main.go:141] libmachine: STDERR: 
	I0719 07:31:47.343674    8318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2
	I0719 07:31:47.343681    8318 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:47.343693    8318 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:47.343724    8318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:2d:79:f1:0e:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2
	I0719 07:31:47.345335    8318 main.go:141] libmachine: STDOUT: 
	I0719 07:31:47.345351    8318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:47.345368    8318 client.go:171] duration metric: took 263.958375ms to LocalClient.Create
	I0719 07:31:49.347532    8318 start.go:128] duration metric: took 2.292715708s to createHost
	I0719 07:31:49.347607    8318 start.go:83] releasing machines lock for "force-systemd-flag-710000", held for 2.292835708s
	W0719 07:31:49.347686    8318 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:49.365745    8318 out.go:177] * Deleting "force-systemd-flag-710000" in qemu2 ...
	W0719 07:31:49.381856    8318 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:49.381885    8318 start.go:729] Will try again in 5 seconds ...
	I0719 07:31:54.384049    8318 start.go:360] acquireMachinesLock for force-systemd-flag-710000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:54.384444    8318 start.go:364] duration metric: took 291.667µs to acquireMachinesLock for "force-systemd-flag-710000"
	I0719 07:31:54.384539    8318 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-710000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:54.384769    8318 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:54.396179    8318 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:54.445769    8318 start.go:159] libmachine.API.Create for "force-systemd-flag-710000" (driver="qemu2")
	I0719 07:31:54.445816    8318 client.go:168] LocalClient.Create starting
	I0719 07:31:54.445949    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:54.446008    8318 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:54.446026    8318 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:54.446095    8318 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:54.446140    8318 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:54.446152    8318 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:54.446656    8318 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:54.574003    8318 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:54.657184    8318 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:54.657191    8318 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:54.657372    8318 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2
	I0719 07:31:54.666551    8318 main.go:141] libmachine: STDOUT: 
	I0719 07:31:54.666569    8318 main.go:141] libmachine: STDERR: 
	I0719 07:31:54.666623    8318 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2 +20000M
	I0719 07:31:54.674467    8318 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:54.674481    8318 main.go:141] libmachine: STDERR: 
	I0719 07:31:54.674494    8318 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2
	I0719 07:31:54.674503    8318 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:54.674513    8318 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:54.674545    8318 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:cc:26:52:be:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-flag-710000/disk.qcow2
	I0719 07:31:54.676188    8318 main.go:141] libmachine: STDOUT: 
	I0719 07:31:54.676203    8318 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:54.676215    8318 client.go:171] duration metric: took 230.394167ms to LocalClient.Create
	I0719 07:31:56.678375    8318 start.go:128] duration metric: took 2.293592292s to createHost
	I0719 07:31:56.678439    8318 start.go:83] releasing machines lock for "force-systemd-flag-710000", held for 2.293987s
	W0719 07:31:56.678767    8318 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:56.686286    8318 out.go:177] 
	W0719 07:31:56.691377    8318 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:31:56.691401    8318 out.go:239] * 
	* 
	W0719 07:31:56.694056    8318 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:31:56.702325    8318 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-710000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-710000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-710000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.656833ms)

-- stdout --
	* The control-plane node force-systemd-flag-710000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-710000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-710000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-19 07:31:56.797265 -0700 PDT m=+746.073110584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-710000 -n force-systemd-flag-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-710000 -n force-systemd-flag-710000: exit status 7 (33.032ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-710000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-710000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-710000
--- FAIL: TestForceSystemdFlag (9.95s)

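For context, the assertion this test never reached boils down to querying Docker's cgroup driver inside the VM and requiring "systemd". The Go sketch below mirrors the shape of that check; the wrapper is illustrative and only reuses the binary path and profile name from the log above, not the actual helpers in docker_test.go.

	// cgroupcheck.go - illustrative sketch of the TestForceSystemdFlag assertion,
	// not the test suite's own code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Binary path and profile name copied from the failed run above.
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-710000",
			"ssh", "docker info --format {{.CgroupDriver}}")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// In this run the host never came up, so the ssh subcommand exits
			// with status 83 and the check fails before the real assertion.
			fmt.Fprintf(os.Stderr, "ssh failed: %v\n%s", err, out)
			os.Exit(1)
		}
		if driver := strings.TrimSpace(string(out)); driver != "systemd" {
			fmt.Fprintf(os.Stderr, "expected cgroup driver %q, got %q\n", "systemd", driver)
			os.Exit(1)
		}
		fmt.Println("docker is using the systemd cgroup driver")
	}

As with TestDockerFlags, the failure here is at the ssh step against a Stopped host, not a wrong cgroup driver.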
TestForceSystemdEnv (10.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-580000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-580000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.866847542s)

-- stdout --
	* [force-systemd-env-580000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-580000" primary control-plane node in "force-systemd-env-580000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-580000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	

-- /stdout --
** stderr ** 
	I0719 07:31:41.860686    8286 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:31:41.860830    8286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:41.860833    8286 out.go:304] Setting ErrFile to fd 2...
	I0719 07:31:41.860835    8286 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:31:41.860957    8286 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:31:41.862050    8286 out.go:298] Setting JSON to false
	I0719 07:31:41.879142    8286 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5470,"bootTime":1721394031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:31:41.879211    8286 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:31:41.883726    8286 out.go:177] * [force-systemd-env-580000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:31:41.890754    8286 notify.go:220] Checking for updates...
	I0719 07:31:41.894712    8286 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:31:41.906683    8286 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:31:41.911690    8286 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:31:41.919625    8286 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:31:41.922691    8286 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:31:41.925725    8286 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0719 07:31:41.930040    8286 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:31:41.930090    8286 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:31:41.933683    8286 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:31:41.940685    8286 start.go:297] selected driver: qemu2
	I0719 07:31:41.940696    8286 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:31:41.940704    8286 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:31:41.942988    8286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:31:41.946695    8286 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:31:41.950777    8286 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:31:41.950793    8286 cni.go:84] Creating CNI manager for ""
	I0719 07:31:41.950801    8286 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:31:41.950808    8286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:31:41.950841    8286 start.go:340] cluster config:
	{Name:force-systemd-env-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:31:41.954230    8286 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:31:41.961721    8286 out.go:177] * Starting "force-systemd-env-580000" primary control-plane node in "force-systemd-env-580000" cluster
	I0719 07:31:41.965688    8286 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:31:41.965718    8286 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:31:41.965739    8286 cache.go:56] Caching tarball of preloaded images
	I0719 07:31:41.965804    8286 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:31:41.965809    8286 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:31:41.965869    8286 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/force-systemd-env-580000/config.json ...
	I0719 07:31:41.965880    8286 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/force-systemd-env-580000/config.json: {Name:mk2b62ca180a00f656dadfaeb5b3dff3d89de64b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:31:41.966085    8286 start.go:360] acquireMachinesLock for force-systemd-env-580000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:41.966119    8286 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "force-systemd-env-580000"
	I0719 07:31:41.966129    8286 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
	I0719 07:31:41.966157    8286 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:41.970531    8286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:41.985651    8286 start.go:159] libmachine.API.Create for "force-systemd-env-580000" (driver="qemu2")
	I0719 07:31:41.985679    8286 client.go:168] LocalClient.Create starting
	I0719 07:31:41.985757    8286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:41.985788    8286 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:41.985797    8286 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:41.985844    8286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:41.985868    8286 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:41.985878    8286 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:41.986251    8286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:42.110990    8286 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:42.177413    8286 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:42.177422    8286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:42.177627    8286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2
	I0719 07:31:42.186963    8286 main.go:141] libmachine: STDOUT: 
	I0719 07:31:42.186980    8286 main.go:141] libmachine: STDERR: 
	I0719 07:31:42.187024    8286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2 +20000M
	I0719 07:31:42.195028    8286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:42.195043    8286 main.go:141] libmachine: STDERR: 
	I0719 07:31:42.195059    8286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2
	I0719 07:31:42.195062    8286 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:42.195078    8286 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:42.195111    8286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:28:3a:0a:03:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2
	I0719 07:31:42.196774    8286 main.go:141] libmachine: STDOUT: 
	I0719 07:31:42.196789    8286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:42.196806    8286 client.go:171] duration metric: took 211.124708ms to LocalClient.Create
	I0719 07:31:44.199020    8286 start.go:128] duration metric: took 2.232836917s to createHost
	I0719 07:31:44.199099    8286 start.go:83] releasing machines lock for "force-systemd-env-580000", held for 2.232985s
	W0719 07:31:44.199168    8286 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:44.206303    8286 out.go:177] * Deleting "force-systemd-env-580000" in qemu2 ...
	W0719 07:31:44.230651    8286 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:44.230683    8286 start.go:729] Will try again in 5 seconds ...
	I0719 07:31:49.232834    8286 start.go:360] acquireMachinesLock for force-systemd-env-580000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:49.347769    8286 start.go:364] duration metric: took 114.788625ms to acquireMachinesLock for "force-systemd-env-580000"
	I0719 07:31:49.347916    8286 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:49.348119    8286 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:49.357752    8286 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 07:31:49.406086    8286 start.go:159] libmachine.API.Create for "force-systemd-env-580000" (driver="qemu2")
	I0719 07:31:49.406139    8286 client.go:168] LocalClient.Create starting
	I0719 07:31:49.406281    8286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:49.406339    8286 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:49.406352    8286 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:49.406409    8286 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:49.406465    8286 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:49.406474    8286 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:49.407083    8286 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:49.534922    8286 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:49.636787    8286 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:49.636793    8286 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:49.636991    8286 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2
	I0719 07:31:49.646193    8286 main.go:141] libmachine: STDOUT: 
	I0719 07:31:49.646219    8286 main.go:141] libmachine: STDERR: 
	I0719 07:31:49.646276    8286 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2 +20000M
	I0719 07:31:49.654146    8286 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:49.654164    8286 main.go:141] libmachine: STDERR: 
	I0719 07:31:49.654174    8286 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2
	I0719 07:31:49.654177    8286 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:49.654188    8286 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:49.654218    8286 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:10:ec:da:fe:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/force-systemd-env-580000/disk.qcow2
	I0719 07:31:49.655814    8286 main.go:141] libmachine: STDOUT: 
	I0719 07:31:49.655833    8286 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:49.655847    8286 client.go:171] duration metric: took 249.703125ms to LocalClient.Create
	I0719 07:31:51.658016    8286 start.go:128] duration metric: took 2.309875292s to createHost
	I0719 07:31:51.658071    8286 start.go:83] releasing machines lock for "force-systemd-env-580000", held for 2.310264958s
	W0719 07:31:51.658420    8286 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:51.667034    8286 out.go:177] 
	W0719 07:31:51.671010    8286 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:31:51.671038    8286 out.go:239] * 
	* 
	W0719 07:31:51.673535    8286 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:31:51.683787    8286 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-580000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-580000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-580000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.969708ms)

-- stdout --
	* The control-plane node force-systemd-env-580000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-580000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-580000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-19 07:31:51.77951 -0700 PDT m=+741.055322043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-580000 -n force-systemd-env-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-580000 -n force-systemd-env-580000: exit status 7 (32.328375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-580000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-580000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-580000
--- FAIL: TestForceSystemdEnv (10.05s)
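
Note on the failure mode: every attempt above dies at the same point, with qemu-system-aarch64 launched through socket_vmnet_client reporting Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing is listening on the socket_vmnet unix socket on this host. The Go sketch below is not part of the minikube tree; the socket path is simply the default seen in the log. It dials the socket the way a client would, which is a quick host-side check before re-running the suite.

// probe_socket_vmnet.go — minimal reachability check for the socket_vmnet socket.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // default path from the failures above
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the GUEST_PROVISION errors:
		// the path may exist, but no daemon is accepting connections on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening at", socketPath)
}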

TestErrorSpam/setup (9.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-848000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-848000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 --driver=qemu2 : exit status 80 (9.920464625s)

-- stdout --
	* [nospam-848000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-848000" primary control-plane node in "nospam-848000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-848000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-848000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-848000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-848000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-848000" primary control-plane node in "nospam-848000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-848000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-848000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.92s)
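
The stderr the test flags as "unexpected" is minikube's built-in recovery loop: StartHost fails, the half-created profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), makes one more attempt, and only then exits with GUEST_PROVISION. The sketch below is an illustrative reconstruction of that control flow with stand-in names, not minikube's actual start.go.

// retry_sketch.go — reconstruction of the two-attempt StartHost pattern in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real host-creation call that fails above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2
	for i := 1; i <= attempts; i++ {
		err := startHost()
		if err == nil {
			fmt.Println("host started")
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		if i < attempts {
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION: error provisioning guest")
}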

TestFunctional/serial/StartWithProxy (9.85s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-971000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-971000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.774653833s)

-- stdout --
	* [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-971000" primary control-plane node in "functional-971000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-971000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50999 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50999 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50999 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-971000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-971000" primary control-plane node in "functional-971000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-971000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50999 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50999 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50999 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (69.2165ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.85s)
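
The test never reaches the "You appear to be using a proxy" path because start aborts first; the only proxy-related output is the repeated "Local proxy ignored" warning, which fires because HTTP_PROXY points at localhost:50999 and a loopback proxy on the host would be unreachable from inside the VM, so it is not forwarded to the docker env. The sketch below is a hypothetical illustration of that check (the function name isLocalProxy and the parsing are my own, not minikube's code).

// local_proxy_check.go — illustrative check behind the "Local proxy ignored" warning.
package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

// isLocalProxy reports whether a proxy URL or host:port points at loopback.
func isLocalProxy(raw string) bool {
	raw = strings.TrimPrefix(raw, "http://")
	raw = strings.TrimPrefix(raw, "https://")
	host := raw
	if h, _, err := net.SplitHostPort(raw); err == nil {
		host = h
	}
	if host == "localhost" {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	if p := os.Getenv("HTTP_PROXY"); p != "" && isLocalProxy(p) {
		fmt.Printf("! Local proxy ignored: not passing HTTP_PROXY=%s to docker env.\n", p)
	}
}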

TestFunctional/serial/SoftStart (5.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-971000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-971000 --alsologtostderr -v=8: exit status 80 (5.179489375s)

-- stdout --
	* [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-971000" primary control-plane node in "functional-971000" cluster
	* Restarting existing qemu2 VM for "functional-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:21:03.980314    6730 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:21:03.980432    6730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:03.980435    6730 out.go:304] Setting ErrFile to fd 2...
	I0719 07:21:03.980438    6730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:03.980553    6730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:21:03.981539    6730 out.go:298] Setting JSON to false
	I0719 07:21:03.997578    6730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4832,"bootTime":1721394031,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:21:03.997652    6730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:21:04.001630    6730 out.go:177] * [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:21:04.008669    6730 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:21:04.008757    6730 notify.go:220] Checking for updates...
	I0719 07:21:04.014593    6730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:21:04.017647    6730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:21:04.018983    6730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:21:04.021633    6730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:21:04.024612    6730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:21:04.027904    6730 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:21:04.027967    6730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:21:04.032552    6730 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:21:04.039649    6730 start.go:297] selected driver: qemu2
	I0719 07:21:04.039660    6730 start.go:901] validating driver "qemu2" against &{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:21:04.039730    6730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:21:04.042053    6730 cni.go:84] Creating CNI manager for ""
	I0719 07:21:04.042071    6730 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:21:04.042110    6730 start.go:340] cluster config:
	{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:21:04.045596    6730 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:21:04.054626    6730 out.go:177] * Starting "functional-971000" primary control-plane node in "functional-971000" cluster
	I0719 07:21:04.058616    6730 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:21:04.058632    6730 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:21:04.058643    6730 cache.go:56] Caching tarball of preloaded images
	I0719 07:21:04.058700    6730 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:21:04.058706    6730 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:21:04.058760    6730 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/functional-971000/config.json ...
	I0719 07:21:04.059181    6730 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:21:04.059208    6730 start.go:364] duration metric: took 21.459µs to acquireMachinesLock for "functional-971000"
	I0719 07:21:04.059216    6730 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:21:04.059221    6730 fix.go:54] fixHost starting: 
	I0719 07:21:04.059333    6730 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
	W0719 07:21:04.059341    6730 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:21:04.066518    6730 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
	I0719 07:21:04.070608    6730 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:21:04.070648    6730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
	I0719 07:21:04.072733    6730 main.go:141] libmachine: STDOUT: 
	I0719 07:21:04.072751    6730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:21:04.072780    6730 fix.go:56] duration metric: took 13.557542ms for fixHost
	I0719 07:21:04.072784    6730 start.go:83] releasing machines lock for "functional-971000", held for 13.572ms
	W0719 07:21:04.072790    6730 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:21:04.072819    6730 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:21:04.072823    6730 start.go:729] Will try again in 5 seconds ...
	I0719 07:21:09.074929    6730 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:21:09.075373    6730 start.go:364] duration metric: took 319.625µs to acquireMachinesLock for "functional-971000"
	I0719 07:21:09.075493    6730 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:21:09.075509    6730 fix.go:54] fixHost starting: 
	I0719 07:21:09.076221    6730 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
	W0719 07:21:09.076250    6730 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:21:09.080683    6730 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
	I0719 07:21:09.086519    6730 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:21:09.086778    6730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
	I0719 07:21:09.095805    6730 main.go:141] libmachine: STDOUT: 
	I0719 07:21:09.095863    6730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:21:09.095929    6730 fix.go:56] duration metric: took 20.418292ms for fixHost
	I0719 07:21:09.095941    6730 start.go:83] releasing machines lock for "functional-971000", held for 20.546ms
	W0719 07:21:09.096109    6730 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:21:09.102574    6730 out.go:177] 
	W0719 07:21:09.106676    6730 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:21:09.106701    6730 out.go:239] * 
	* 
	W0719 07:21:09.109254    6730 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:21:09.116607    6730 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-971000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.181380167s for "functional-971000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (70.164625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.25s)
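
Unlike the fresh starts above, soft start takes the fixHost path visible in the log: an existing profile is found, recreateIfNeeded sees state=Stopped, and the VM is restarted rather than re-created, after which the same socket_vmnet error aborts the restart. The outline below is illustrative only (not the real fix.go), showing just the state branch the log walks through.

// fix_host_sketch.go — illustrative outline of the restart-vs-create branch.
package main

import "fmt"

type state string

const (
	running state = "Running"
	stopped state = "Stopped"
)

// fixHost mimics "Skipping create...Using existing machine configuration".
func fixHost(s state) error {
	if s == running {
		fmt.Println("host already running, nothing to fix")
		return nil
	}
	fmt.Println(`* Restarting existing qemu2 VM for "functional-971000" ...`)
	// In this run the restart fails because nothing listens on the socket.
	return fmt.Errorf(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := fixHost(stopped); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
	}
}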

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.350958ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-971000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (30.929ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
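
current-context is unset because the failed start never wrote a context into the kubeconfig at /Users/jenkins/minikube-integration/19302-5980/kubeconfig. The sketch below shows what `kubectl config current-context` effectively reads, using client-go's kubeconfig loader; it is a minimal standalone example (it assumes a module with k8s.io/client-go available), not part of the test suite.

// current_context.go — read CurrentContext the way kubectl does.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// NewDefaultClientConfigLoadingRules honors $KUBECONFIG, matching the
	// KUBECONFIG=.../19302-5980/kubeconfig environment shown in the log.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if cfg.CurrentContext == "" {
		// This is the state the test hit: kubeconfig exists, no context set.
		fmt.Fprintln(os.Stderr, "error: current-context is not set")
		os.Exit(1)
	}
	fmt.Println(cfg.CurrentContext)
}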

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-971000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-971000 get po -A: exit status 1 (26.285584ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-971000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-971000\n"*: args "kubectl --context functional-971000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-971000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (29.795625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl images: exit status 83 (40.898208ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (40.718708ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-971000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.968667ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.768833ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-971000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 kubectl -- --context functional-971000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 kubectl -- --context functional-971000 get pods: exit status 1 (706.895625ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-971000
	* no server found for cluster "functional-971000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-971000 kubectl -- --context functional-971000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (32.165292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)
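This failure is configuration-level rather than connectivity-level: because the cluster never started, no "functional-971000" context was ever written to the kubeconfig, which is exactly what both stderr lines report. A hedged sketch of checking for the missing context first, assuming Python 3, kubectl on PATH, and the KUBECONFIG path shown in the logs:

    import os
    import subprocess

    # "kubectl config get-contexts <name>" exits non-zero when the named context
    # is absent -- the same "context was not found" condition reported above.
    env = dict(os.environ,
               KUBECONFIG="/Users/jenkins/minikube-integration/19302-5980/kubeconfig")
    result = subprocess.run(
        ["kubectl", "config", "get-contexts", "functional-971000"],
        capture_output=True, text=True, env=env,
    )
    print("context exists" if result.returncode == 0 else "context missing")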

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-971000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-971000 get pods: exit status 1 (945.119208ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-971000
	* no server found for cluster "functional-971000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-971000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (29.748209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.98s)

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-971000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-971000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.180383041s)

-- stdout --
	* [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-971000" primary control-plane node in "functional-971000" cluster
	* Restarting existing qemu2 VM for "functional-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-971000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-971000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.181036666s for "functional-971000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (69.00275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
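The root cause of this whole TestFunctional group surfaces here: both restart attempts die because the qemu2 driver's socket_vmnet_client cannot connect to /var/run/socket_vmnet, so the VM, and with it the kubeconfig context, kubectl access, and the ssh-based cache checks, never comes up. A minimal probe that reproduces the driver's "Connection refused" (a sketch, assuming Python 3 on the same host; the socket path is taken from the error text):

    import socket

    # Dial the unix socket the qemu2 driver uses: ConnectionRefusedError means a
    # socket file exists but nothing is listening; FileNotFoundError means the
    # socket_vmnet daemon never created it; PermissionError points at ownership.
    PATH = "/var/run/socket_vmnet"
    probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        probe.connect(PATH)
        print("socket_vmnet is accepting connections")
    except (ConnectionRefusedError, FileNotFoundError, PermissionError) as exc:
        print(f"cannot reach {PATH}: {exc}")
    finally:
        probe.close()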

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-971000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-971000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.758458ms)

** stderr ** 
	error: context "functional-971000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-971000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (30.500458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 logs: exit status 83 (77.58275ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-549000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
	| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-746000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-746000                                                  | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | -o=json --download-only                                                  | download-only-899000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | -p download-only-899000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-899000                                                  | download-only-899000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-746000                                                  | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-899000                                                  | download-only-899000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | --download-only -p                                                       | binary-mirror-997000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | binary-mirror-997000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50963                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-997000                                                  | binary-mirror-997000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| addons  | enable dashboard -p                                                      | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | addons-047000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | addons-047000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-047000 --wait=true                                             | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-047000                                                         | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | -p nospam-848000 -n=1 --memory=2250 --wait=false                         | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-848000                                                         | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | -p functional-971000                                                     | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-971000                                                     | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | minikube-local-cache-test:functional-971000                              |                      |         |         |                     |                     |
	| cache   | functional-971000 cache delete                                           | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | minikube-local-cache-test:functional-971000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	| ssh     | functional-971000 ssh sudo                                               | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-971000                                                        | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-971000 ssh                                                    | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-971000 cache reload                                           | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	| ssh     | functional-971000 ssh                                                    | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-971000 kubectl --                                             | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
	|         | --context functional-971000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-971000                                                     | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:21:14
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:21:14.183422    6805 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:21:14.183535    6805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:14.183537    6805 out.go:304] Setting ErrFile to fd 2...
	I0719 07:21:14.183539    6805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:14.183656    6805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:21:14.184679    6805 out.go:298] Setting JSON to false
	I0719 07:21:14.200504    6805 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4843,"bootTime":1721394031,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:21:14.200572    6805 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:21:14.205615    6805 out.go:177] * [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:21:14.213621    6805 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:21:14.213677    6805 notify.go:220] Checking for updates...
	I0719 07:21:14.219561    6805 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:21:14.222626    6805 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:21:14.225610    6805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:21:14.226914    6805 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:21:14.229605    6805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:21:14.232964    6805 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:21:14.233011    6805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:21:14.236534    6805 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:21:14.245665    6805 start.go:297] selected driver: qemu2
	I0719 07:21:14.245668    6805 start.go:901] validating driver "qemu2" against &{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:21:14.245719    6805 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:21:14.247976    6805 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:21:14.247997    6805 cni.go:84] Creating CNI manager for ""
	I0719 07:21:14.248004    6805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:21:14.248055    6805 start.go:340] cluster config:
	{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:21:14.251513    6805 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:21:14.254696    6805 out.go:177] * Starting "functional-971000" primary control-plane node in "functional-971000" cluster
	I0719 07:21:14.261631    6805 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:21:14.261649    6805 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:21:14.261661    6805 cache.go:56] Caching tarball of preloaded images
	I0719 07:21:14.261724    6805 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:21:14.261729    6805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:21:14.261793    6805 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/functional-971000/config.json ...
	I0719 07:21:14.262104    6805 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:21:14.262137    6805 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "functional-971000"
	I0719 07:21:14.262144    6805 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:21:14.262148    6805 fix.go:54] fixHost starting: 
	I0719 07:21:14.262274    6805 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
	W0719 07:21:14.262280    6805 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:21:14.271548    6805 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
	I0719 07:21:14.277604    6805 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:21:14.277657    6805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
	I0719 07:21:14.279759    6805 main.go:141] libmachine: STDOUT: 
	I0719 07:21:14.279777    6805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:21:14.279809    6805 fix.go:56] duration metric: took 17.65975ms for fixHost
	I0719 07:21:14.279811    6805 start.go:83] releasing machines lock for "functional-971000", held for 17.671584ms
	W0719 07:21:14.279817    6805 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:21:14.279857    6805 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:21:14.279862    6805 start.go:729] Will try again in 5 seconds ...
	I0719 07:21:19.282053    6805 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:21:19.282479    6805 start.go:364] duration metric: took 351.459µs to acquireMachinesLock for "functional-971000"
	I0719 07:21:19.282605    6805 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:21:19.282620    6805 fix.go:54] fixHost starting: 
	I0719 07:21:19.283306    6805 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
	W0719 07:21:19.283323    6805 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:21:19.287699    6805 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
	I0719 07:21:19.292743    6805 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:21:19.292941    6805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
	I0719 07:21:19.301970    6805 main.go:141] libmachine: STDOUT: 
	I0719 07:21:19.302014    6805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:21:19.302081    6805 fix.go:56] duration metric: took 19.466583ms for fixHost
	I0719 07:21:19.302094    6805 start.go:83] releasing machines lock for "functional-971000", held for 19.600209ms
	W0719 07:21:19.302279    6805 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:21:19.310652    6805 out.go:177] 
	W0719 07:21:19.314651    6805 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:21:19.314679    6805 out.go:239] * 
	W0719 07:21:19.316980    6805 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:21:19.324702    6805 out.go:177] 
	
	
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-971000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
|         | -p download-only-549000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
| start   | -o=json --download-only                                                  | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
|         | -p download-only-746000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| delete  | -p download-only-746000                                                  | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| start   | -o=json --download-only                                                  | download-only-899000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | -p download-only-899000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| delete  | -p download-only-899000                                                  | download-only-899000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| delete  | -p download-only-549000                                                  | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| delete  | -p download-only-746000                                                  | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| delete  | -p download-only-899000                                                  | download-only-899000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| start   | --download-only -p                                                       | binary-mirror-997000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | binary-mirror-997000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50963                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-997000                                                  | binary-mirror-997000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| addons  | enable dashboard -p                                                      | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | addons-047000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | addons-047000                                                            |                      |         |         |                     |                     |
| start   | -p addons-047000 --wait=true                                             | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-047000                                                         | addons-047000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| start   | -p nospam-848000 -n=1 --memory=2250 --wait=false                         | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-848000 --log_dir                                                  | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-848000                                                         | nospam-848000        | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
| start   | -p functional-971000                                                     | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-971000                                                     | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-971000 cache add                                              | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | minikube-local-cache-test:functional-971000                              |                      |         |         |                     |                     |
| cache   | functional-971000 cache delete                                           | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | minikube-local-cache-test:functional-971000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
| ssh     | functional-971000 ssh sudo                                               | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-971000                                                        | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-971000 ssh                                                    | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-971000 cache reload                                           | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
| ssh     | functional-971000 ssh                                                    | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT | 19 Jul 24 07:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-971000 kubectl --                                             | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
|         | --context functional-971000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-971000                                                     | functional-971000    | jenkins | v1.33.1 | 19 Jul 24 07:21 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/07/19 07:21:14
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0719 07:21:14.183422    6805 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:14.183535    6805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:14.183537    6805 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:14.183539    6805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:14.183656    6805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:14.184679    6805 out.go:298] Setting JSON to false
I0719 07:21:14.200504    6805 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4843,"bootTime":1721394031,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0719 07:21:14.200572    6805 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0719 07:21:14.205615    6805 out.go:177] * [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0719 07:21:14.213621    6805 out.go:177]   - MINIKUBE_LOCATION=19302
I0719 07:21:14.213677    6805 notify.go:220] Checking for updates...
I0719 07:21:14.219561    6805 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
I0719 07:21:14.222626    6805 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0719 07:21:14.225610    6805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0719 07:21:14.226914    6805 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
I0719 07:21:14.229605    6805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0719 07:21:14.232964    6805 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:14.233011    6805 driver.go:392] Setting default libvirt URI to qemu:///system
I0719 07:21:14.236534    6805 out.go:177] * Using the qemu2 driver based on existing profile
I0719 07:21:14.245665    6805 start.go:297] selected driver: qemu2
I0719 07:21:14.245668    6805 start.go:901] validating driver "qemu2" against &{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0719 07:21:14.245719    6805 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0719 07:21:14.247976    6805 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0719 07:21:14.247997    6805 cni.go:84] Creating CNI manager for ""
I0719 07:21:14.248004    6805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0719 07:21:14.248055    6805 start.go:340] cluster config:
{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0719 07:21:14.251513    6805 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0719 07:21:14.254696    6805 out.go:177] * Starting "functional-971000" primary control-plane node in "functional-971000" cluster
I0719 07:21:14.261631    6805 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0719 07:21:14.261649    6805 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0719 07:21:14.261661    6805 cache.go:56] Caching tarball of preloaded images
I0719 07:21:14.261724    6805 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0719 07:21:14.261729    6805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0719 07:21:14.261793    6805 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/functional-971000/config.json ...
I0719 07:21:14.262104    6805 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 07:21:14.262137    6805 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "functional-971000"
I0719 07:21:14.262144    6805 start.go:96] Skipping create...Using existing machine configuration
I0719 07:21:14.262148    6805 fix.go:54] fixHost starting: 
I0719 07:21:14.262274    6805 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
W0719 07:21:14.262280    6805 fix.go:138] unexpected machine state, will restart: <nil>
I0719 07:21:14.271548    6805 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
I0719 07:21:14.277604    6805 qemu.go:418] Using hvf for hardware acceleration
I0719 07:21:14.277657    6805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
I0719 07:21:14.279759    6805 main.go:141] libmachine: STDOUT: 
I0719 07:21:14.279777    6805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0719 07:21:14.279809    6805 fix.go:56] duration metric: took 17.65975ms for fixHost
I0719 07:21:14.279811    6805 start.go:83] releasing machines lock for "functional-971000", held for 17.671584ms
W0719 07:21:14.279817    6805 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0719 07:21:14.279857    6805 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0719 07:21:14.279862    6805 start.go:729] Will try again in 5 seconds ...
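The `Failed to connect to "/var/run/socket_vmnet": Connection refused` above is the proximate cause of this failure, and it recurs across most failures in this report: with the qemu2 driver the VM's networking goes through a host-side socket_vmnet daemon, and nothing was listening on its socket on this agent. A minimal diagnostic sketch, assuming the Homebrew socket_vmnet install whose paths appear in the log above (the service commands are the standard minikube qemu2 + socket_vmnet setup, not taken from this report):

# Check whether the socket_vmnet daemon is running and its socket exists
# (socket path from the log above; everything else here is an assumption).
ls -l /var/run/socket_vmnet
pgrep -fl socket_vmnet

# If it is not running, restart it; socket_vmnet is normally run as a root
# service because it needs the vmnet entitlement (assumed Homebrew install).
sudo brew services restart socket_vmnet

# Retry the start that failed above.
out/minikube-darwin-arm64 start -p functional-971000 --driver=qemu2 --network=socket_vmnet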
I0719 07:21:19.282053    6805 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 07:21:19.282479    6805 start.go:364] duration metric: took 351.459µs to acquireMachinesLock for "functional-971000"
I0719 07:21:19.282605    6805 start.go:96] Skipping create...Using existing machine configuration
I0719 07:21:19.282620    6805 fix.go:54] fixHost starting: 
I0719 07:21:19.283306    6805 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
W0719 07:21:19.283323    6805 fix.go:138] unexpected machine state, will restart: <nil>
I0719 07:21:19.287699    6805 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
I0719 07:21:19.292743    6805 qemu.go:418] Using hvf for hardware acceleration
I0719 07:21:19.292941    6805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
I0719 07:21:19.301970    6805 main.go:141] libmachine: STDOUT: 
I0719 07:21:19.302014    6805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0719 07:21:19.302081    6805 fix.go:56] duration metric: took 19.466583ms for fixHost
I0719 07:21:19.302094    6805 start.go:83] releasing machines lock for "functional-971000", held for 19.600209ms
W0719 07:21:19.302279    6805 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0719 07:21:19.310652    6805 out.go:177] 
W0719 07:21:19.314651    6805 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0719 07:21:19.314679    6805 out.go:239] * 
W0719 07:21:19.316980    6805 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 07:21:19.324702    6805 out.go:177]
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
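Both logs tests make the same assertion: the output of minikube logs must contain the word "Linux", which only appears once a guest VM has actually booted, so with the VM stopped the dump above cannot satisfy it. A hand-run sketch of the two checks against the same profile (/tmp/logs.txt is an arbitrary stand-in for the harness's per-test temp path):

# LogsCmd: the word "Linux" must appear in the streamed logs.
out/minikube-darwin-arm64 -p functional-971000 logs | grep -q Linux && echo "LogsCmd check passes"

# LogsFileCmd: same check against the file written by logs --file.
out/minikube-darwin-arm64 -p functional-971000 logs --file /tmp/logs.txt && grep -q Linux /tmp/logs.txt && echo "LogsFileCmd check passes"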
TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1563758691/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Last Start <==
Log file created at: 2024/07/19 07:21:14
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0719 07:21:14.183422    6805 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:14.183535    6805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:14.183537    6805 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:14.183539    6805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:14.183656    6805 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:14.184679    6805 out.go:298] Setting JSON to false
I0719 07:21:14.200504    6805 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4843,"bootTime":1721394031,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0719 07:21:14.200572    6805 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0719 07:21:14.205615    6805 out.go:177] * [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0719 07:21:14.213621    6805 out.go:177]   - MINIKUBE_LOCATION=19302
I0719 07:21:14.213677    6805 notify.go:220] Checking for updates...
I0719 07:21:14.219561    6805 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
I0719 07:21:14.222626    6805 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0719 07:21:14.225610    6805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0719 07:21:14.226914    6805 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
I0719 07:21:14.229605    6805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0719 07:21:14.232964    6805 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:14.233011    6805 driver.go:392] Setting default libvirt URI to qemu:///system
I0719 07:21:14.236534    6805 out.go:177] * Using the qemu2 driver based on existing profile
I0719 07:21:14.245665    6805 start.go:297] selected driver: qemu2
I0719 07:21:14.245668    6805 start.go:901] validating driver "qemu2" against &{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0719 07:21:14.245719    6805 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0719 07:21:14.247976    6805 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0719 07:21:14.247997    6805 cni.go:84] Creating CNI manager for ""
I0719 07:21:14.248004    6805 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0719 07:21:14.248055    6805 start.go:340] cluster config:
{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0719 07:21:14.251513    6805 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0719 07:21:14.254696    6805 out.go:177] * Starting "functional-971000" primary control-plane node in "functional-971000" cluster
I0719 07:21:14.261631    6805 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0719 07:21:14.261649    6805 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0719 07:21:14.261661    6805 cache.go:56] Caching tarball of preloaded images
I0719 07:21:14.261724    6805 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0719 07:21:14.261729    6805 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0719 07:21:14.261793    6805 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/functional-971000/config.json ...
I0719 07:21:14.262104    6805 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 07:21:14.262137    6805 start.go:364] duration metric: took 28.833µs to acquireMachinesLock for "functional-971000"
I0719 07:21:14.262144    6805 start.go:96] Skipping create...Using existing machine configuration
I0719 07:21:14.262148    6805 fix.go:54] fixHost starting: 
I0719 07:21:14.262274    6805 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
W0719 07:21:14.262280    6805 fix.go:138] unexpected machine state, will restart: <nil>
I0719 07:21:14.271548    6805 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
I0719 07:21:14.277604    6805 qemu.go:418] Using hvf for hardware acceleration
I0719 07:21:14.277657    6805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
I0719 07:21:14.279759    6805 main.go:141] libmachine: STDOUT: 
I0719 07:21:14.279777    6805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0719 07:21:14.279809    6805 fix.go:56] duration metric: took 17.65975ms for fixHost
I0719 07:21:14.279811    6805 start.go:83] releasing machines lock for "functional-971000", held for 17.671584ms
W0719 07:21:14.279817    6805 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0719 07:21:14.279857    6805 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0719 07:21:14.279862    6805 start.go:729] Will try again in 5 seconds ...
I0719 07:21:19.282053    6805 start.go:360] acquireMachinesLock for functional-971000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 07:21:19.282479    6805 start.go:364] duration metric: took 351.459µs to acquireMachinesLock for "functional-971000"
I0719 07:21:19.282605    6805 start.go:96] Skipping create...Using existing machine configuration
I0719 07:21:19.282620    6805 fix.go:54] fixHost starting: 
I0719 07:21:19.283306    6805 fix.go:112] recreateIfNeeded on functional-971000: state=Stopped err=<nil>
W0719 07:21:19.283323    6805 fix.go:138] unexpected machine state, will restart: <nil>
I0719 07:21:19.287699    6805 out.go:177] * Restarting existing qemu2 VM for "functional-971000" ...
I0719 07:21:19.292743    6805 qemu.go:418] Using hvf for hardware acceleration
I0719 07:21:19.292941    6805 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:74:2e:60:e4:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/functional-971000/disk.qcow2
I0719 07:21:19.301970    6805 main.go:141] libmachine: STDOUT: 
I0719 07:21:19.302014    6805 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0719 07:21:19.302081    6805 fix.go:56] duration metric: took 19.466583ms for fixHost
I0719 07:21:19.302094    6805 start.go:83] releasing machines lock for "functional-971000", held for 19.600209ms
W0719 07:21:19.302279    6805 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-971000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0719 07:21:19.310652    6805 out.go:177] 
W0719 07:21:19.314651    6805 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0719 07:21:19.314679    6805 out.go:239] * 
W0719 07:21:19.316980    6805 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 07:21:19.324702    6805 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
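Editor's note: every start attempt in the log above dies at the same point: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). The remaining failures in this section are downstream of that. A minimal sketch for checking the daemon on the agent, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs:

    # Is anything serving the socket the client is pointed at?
    ls -l /var/run/socket_vmnet

    # socket_vmnet runs as a root service under Homebrew; restart it:
    sudo brew services restart socket_vmnet

    # Probe from the client side used in the log (connects, then runs `true`):
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true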

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-971000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-971000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.197584ms)

** stderr ** 
	error: context "functional-971000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-971000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
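Editor's note: the `context "functional-971000" does not exist` errors here and in the tests below are secondary: because the VM never started, minikube never wrote a functional-971000 entry into the kubeconfig, so kubectl fails before contacting any server. A quick confirmation, assuming the KUBECONFIG path reported in the start log:

    # functional-971000 will be missing from the context list:
    kubectl --kubeconfig /Users/jenkins/minikube-integration/19302-5980/kubeconfig config get-contexts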

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-971000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-971000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-971000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-971000 --alsologtostderr -v=1] stderr:
I0719 07:21:58.122432    7120 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:58.122876    7120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.122881    7120 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:58.122883    7120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.123072    7120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:58.123286    7120 mustload.go:65] Loading cluster: functional-971000
I0719 07:21:58.123488    7120 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:58.127517    7120 out.go:177] * The control-plane node functional-971000 host is not running: state=Stopped
I0719 07:21:58.131486    7120 out.go:177]   To start a cluster, run: "minikube start -p functional-971000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (41.089292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.12s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 status: exit status 7 (29.906042ms)

-- stdout --
	functional-971000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-971000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (29.03025ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-971000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 status -o json: exit status 7 (28.536875ms)

-- stdout --
	{"Name":"functional-971000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-971000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (29.103333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
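Editor's note: `minikube status` intentionally exits non-zero when components are down (hence the harness's "exit status 7 (may be ok)"), so automation should branch on the reported state rather than on the exit code alone. A sketch against the JSON output shown above; jq is an assumption here, any JSON parser works:

    # Read the host state from the JSON shown in the log; the pipeline
    # ignores minikube's non-zero exit and keys off the value instead.
    host=$(out/minikube-darwin-arm64 -p functional-971000 status -o json | jq -r .Host)
    if [ "$host" != "Running" ]; then
      echo "host is $host; skipping cluster checks"
    fi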

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-971000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-971000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.645292ms)

** stderr ** 
	error: context "functional-971000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-971000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-971000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-971000 describe po hello-node-connect: exit status 1 (25.944459ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:1600: "kubectl --context functional-971000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-971000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-971000 logs -l app=hello-node-connect: exit status 1 (25.256708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:1606: "kubectl --context functional-971000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-971000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-971000 describe svc hello-node-connect: exit status 1 (26.060792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:1612: "kubectl --context functional-971000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (30.549417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-971000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (30.312375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.11s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "echo hello": exit status 83 (42.606833ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n"*. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "cat /etc/hostname": exit status 83 (39.841834ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-971000"- but got *"* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n"*. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (29.575208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.11s)
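Editor's note: exit status 83 here appears to be minikube refusing the command and printing advice ("host is not running ...") instead of executing it; that advisory text is what leaks into the stdout comparisons in this and the cp/file-sync tests below. A guard like the following (a sketch, not part of the harness) keeps ssh calls from running against a stopped host:

    # Only ssh when the host reports Running:
    if out/minikube-darwin-arm64 -p functional-971000 status --format '{{.Host}}' | grep -q Running; then
      out/minikube-darwin-arm64 -p functional-971000 ssh "echo hello"
    fi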

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.302458ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-971000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.03775ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-971000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-971000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cp functional-971000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd394982892/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 cp functional-971000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd394982892/001/cp-test.txt: exit status 83 (48.570791ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-971000 cp functional-971000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd394982892/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 "sudo cat /home/docker/cp-test.txt": exit status 83 (46.904625ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd394982892/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (47.77975ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-971000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (39.984625ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-971000 ssh -n functional-971000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-971000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-971000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6473/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/test/nested/copy/6473/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/test/nested/copy/6473/hosts": exit status 83 (40.546042ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/test/nested/copy/6473/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-971000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-971000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (30.474916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
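Editor's note: the /etc/test/nested/copy/6473/hosts path comes from minikube's file sync: anything placed under the profile's .minikube/files/ tree is copied to the mirrored path inside the node on start. With a running VM, the test's expectation corresponds to something like (paths taken from the MINIKUBE_HOME in the log):

    mkdir -p /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/test/nested/copy/6473
    cp /etc/hosts /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/test/nested/copy/6473/hosts
    # the file then appears at /etc/test/nested/copy/6473/hosts in the VM after `minikube start`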

TestFunctional/parallel/CertSync (0.28s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6473.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/6473.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/6473.pem": exit status 83 (41.474ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6473.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo cat /etc/ssl/certs/6473.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6473.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-971000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-971000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6473.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /usr/share/ca-certificates/6473.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /usr/share/ca-certificates/6473.pem": exit status 83 (40.81225ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6473.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo cat /usr/share/ca-certificates/6473.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6473.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-971000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-971000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (40.6805ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-971000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-971000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/64732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/64732.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/64732.pem": exit status 83 (39.539833ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/64732.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo cat /etc/ssl/certs/64732.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/64732.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-971000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-971000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/64732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /usr/share/ca-certificates/64732.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /usr/share/ca-certificates/64732.pem": exit status 83 (40.716ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/64732.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo cat /usr/share/ca-certificates/64732.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/64732.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-971000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-971000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (41.882458ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-971000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-971000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (29.939208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.28s)
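Note (context, not part of the captured output): the probed path "/etc/ssl/certs/3ec20f2e.0" follows the OpenSSL c_rehash naming convention, where the file name is the certificate's subject hash plus a ".0" collision suffix; assuming OpenSSL is available, the hash can be recomputed with "openssl x509 -hash -noout -in minikube_test2.pem". The certificate diff itself is a red herring: the ssh command exited 83 before reaching a VM, so the expected certificate is being compared against minikube's "host is not running" help text.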

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-971000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-971000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.114917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-971000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-971000 -n functional-971000: exit status 7 (30.596708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-971000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo systemctl is-active crio": exit status 83 (41.371375ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
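Note (context, not part of the captured output): on a healthy docker-runtime node, "systemctl is-active crio" prints "inactive" and exits with status 3, which is the outcome this assertion accepts. Exit status 83 means the ssh command never reached a node at all, so the check ends up comparing "inactive" against the help text instead.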

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 version -o=json --components: exit status 83 (40.8905ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-971000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-971000 image ls --format short --alsologtostderr:
I0719 07:21:58.523255    7135 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:58.523420    7135 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.523423    7135 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:58.523425    7135 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.523572    7135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:58.524009    7135 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:58.524070    7135 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-971000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-971000 image ls --format table --alsologtostderr:
I0719 07:21:58.745844    7147 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:58.745997    7147 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.746001    7147 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:58.746003    7147 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.746159    7147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:58.746577    7147 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:58.746650    7147 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-971000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-971000 image ls --format json --alsologtostderr:
I0719 07:21:58.711109    7145 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:58.711264    7145 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.711267    7145 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:58.711270    7145 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.711381    7145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:58.711801    7145 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:58.711865    7145 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.03s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-971000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-971000 image ls --format yaml --alsologtostderr:
I0719 07:21:58.559385    7137 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:58.559557    7137 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.559560    7137 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:58.559563    7137 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.559690    7137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:58.560119    7137 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:58.560183    7137 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh pgrep buildkitd: exit status 83 (42.820417ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image build -t localhost/my-image:functional-971000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-971000 image build -t localhost/my-image:functional-971000 testdata/build --alsologtostderr:
I0719 07:21:58.638392    7141 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:58.638829    7141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.638833    7141 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:58.638835    7141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:58.639009    7141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:58.639406    7141 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:58.639887    7141 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:58.640144    7141 build_images.go:133] succeeded building to: 
I0719 07:21:58.640150    7141 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls
functional_test.go:442: expected "localhost/my-image:functional-971000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/DockerEnv/bash (0.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-971000 docker-env) && out/minikube-darwin-arm64 status -p functional-971000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-971000 docker-env) && out/minikube-darwin-arm64 status -p functional-971000": exit status 1 (43.922083ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.04s)
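Note (context, not part of the captured output): "minikube docker-env" normally prints shell exports (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY) that point the local docker client at the VM's daemon. With the host stopped there are no valid exports to eval, and the chained "minikube status" call is what returns exit status 1 here.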

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2: exit status 83 (41.870584ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
** stderr ** 
	I0719 07:21:58.395770    7129 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:21:58.396201    7129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.396205    7129 out.go:304] Setting ErrFile to fd 2...
	I0719 07:21:58.396207    7129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.396387    7129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:21:58.396661    7129 mustload.go:65] Loading cluster: functional-971000
	I0719 07:21:58.396844    7129 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:21:58.401222    7129 out.go:177] * The control-plane node functional-971000 host is not running: state=Stopped
	I0719 07:21:58.405014    7129 out.go:177]   To start a cluster, run: "minikube start -p functional-971000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2: exit status 83 (42.695333ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
** stderr ** 
	I0719 07:21:58.480958    7133 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:21:58.481099    7133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.481102    7133 out.go:304] Setting ErrFile to fd 2...
	I0719 07:21:58.481104    7133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.481255    7133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:21:58.481510    7133 mustload.go:65] Loading cluster: functional-971000
	I0719 07:21:58.481731    7133 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:21:58.486086    7133 out.go:177] * The control-plane node functional-971000 host is not running: state=Stopped
	I0719 07:21:58.490149    7133 out.go:177]   To start a cluster, run: "minikube start -p functional-971000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2: exit status 83 (41.629667ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
** stderr ** 
	I0719 07:21:58.438722    7131 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:21:58.438859    7131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.438862    7131 out.go:304] Setting ErrFile to fd 2...
	I0719 07:21:58.438864    7131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.439002    7131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:21:58.439247    7131 mustload.go:65] Loading cluster: functional-971000
	I0719 07:21:58.439449    7131 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:21:58.443122    7131 out.go:177] * The control-plane node functional-971000 host is not running: state=Stopped
	I0719 07:21:58.447106    7131 out.go:177]   To start a cluster, run: "minikube start -p functional-971000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-971000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-971000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-971000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.024708ms)

** stderr ** 
	error: context "functional-971000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-971000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 service list: exit status 83 (43.324375ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-971000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 service list -o json: exit status 83 (47.856959ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-971000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 service --namespace=default --https --url hello-node: exit status 83 (42.789958ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-971000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 service hello-node --url --format={{.IP}}: exit status 83 (41.786291ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-971000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 service hello-node --url: exit status 83 (42.859291ms)

-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-971000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test.go:1565: failed to parse "* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"": parse "* The control-plane node functional-971000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-971000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
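Note (context, not part of the captured output): the parse failure above comes from Go's net/url package, which rejects control characters in URLs; the test hands it minikube's two-line help message, and the embedded newline is the offending byte. A minimal, self-contained sketch of the same failure (illustrative only, not the test's actual code):

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		// Two-line help text captured on stdout instead of a service URL.
		out := "* The control-plane node functional-971000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-971000\""
		if _, err := url.Parse(out); err != nil {
			fmt.Println(err) // net/url: invalid control character in URL
		}
	}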

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0719 07:21:21.069941    6923 out.go:291] Setting OutFile to fd 1 ...
I0719 07:21:21.070095    6923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:21.070098    6923 out.go:304] Setting ErrFile to fd 2...
I0719 07:21:21.070103    6923 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:21:21.070214    6923 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:21:21.070455    6923 mustload.go:65] Loading cluster: functional-971000
I0719 07:21:21.070648    6923 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:21:21.074747    6923 out.go:177] * The control-plane node functional-971000 host is not running: state=Stopped
I0719 07:21:21.086836    6923 out.go:177]   To start a cluster, run: "minikube start -p functional-971000"

stdout: * The control-plane node functional-971000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-971000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 6924: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-971000": client config: context "functional-971000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-971000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-971000 get svc nginx-svc: exit status 1 (68.541334ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-971000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-971000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image load --daemon docker.io/kicbase/echo-server:functional-971000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-971000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image load --daemon docker.io/kicbase/echo-server:functional-971000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-971000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.27s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-971000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image load --daemon docker.io/kicbase/echo-server:functional-971000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-971000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image save docker.io/kicbase/echo-server:functional-971000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-971000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.027602917s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
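Note (context, not part of the captured output): the scutil dump shows why dig had a target at all: resolver #8 scopes "cluster.local" to 10.96.0.10, the cluster DNS ClusterIP that "minikube tunnel" would normally route into the VM. With the VM stopped, nothing answers behind that route, so the query times out ("no servers could be reached") rather than returning an answer record.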

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (36.70s)

TestMultiControlPlane/serial/StartCluster (9.93s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-991000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-991000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.860728541s)

-- stdout --
	* [ha-991000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-991000" primary control-plane node in "ha-991000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-991000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:24:11.059484    7178 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:24:11.059632    7178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:24:11.059635    7178 out.go:304] Setting ErrFile to fd 2...
	I0719 07:24:11.059638    7178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:24:11.059762    7178 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:24:11.060813    7178 out.go:298] Setting JSON to false
	I0719 07:24:11.077193    7178 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5020,"bootTime":1721394031,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:24:11.077269    7178 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:24:11.081939    7178 out.go:177] * [ha-991000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:24:11.088969    7178 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:24:11.089013    7178 notify.go:220] Checking for updates...
	I0719 07:24:11.094886    7178 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:24:11.097960    7178 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:24:11.100860    7178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:24:11.103922    7178 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:24:11.106899    7178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:24:11.108358    7178 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:24:11.112867    7178 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:24:11.119800    7178 start.go:297] selected driver: qemu2
	I0719 07:24:11.119806    7178 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:24:11.119813    7178 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:24:11.122078    7178 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:24:11.124914    7178 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:24:11.127950    7178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:24:11.127977    7178 cni.go:84] Creating CNI manager for ""
	I0719 07:24:11.127984    7178 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 07:24:11.127988    7178 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 07:24:11.128023    7178 start.go:340] cluster config:
	{Name:ha-991000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:24:11.131904    7178 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:24:11.139915    7178 out.go:177] * Starting "ha-991000" primary control-plane node in "ha-991000" cluster
	I0719 07:24:11.143980    7178 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:24:11.143996    7178 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:24:11.144008    7178 cache.go:56] Caching tarball of preloaded images
	I0719 07:24:11.144079    7178 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:24:11.144085    7178 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:24:11.144284    7178 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/ha-991000/config.json ...
	I0719 07:24:11.144296    7178 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/ha-991000/config.json: {Name:mk13640fd7dded57e50b687f27124fa48e16ee38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:24:11.144582    7178 start.go:360] acquireMachinesLock for ha-991000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:24:11.144615    7178 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "ha-991000"
	I0719 07:24:11.144625    7178 start.go:93] Provisioning new machine with config: &{Name:ha-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:24:11.144651    7178 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:24:11.151888    7178 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:24:11.169126    7178 start.go:159] libmachine.API.Create for "ha-991000" (driver="qemu2")
	I0719 07:24:11.169161    7178 client.go:168] LocalClient.Create starting
	I0719 07:24:11.169232    7178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:24:11.169260    7178 main.go:141] libmachine: Decoding PEM data...
	I0719 07:24:11.169269    7178 main.go:141] libmachine: Parsing certificate...
	I0719 07:24:11.169305    7178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:24:11.169327    7178 main.go:141] libmachine: Decoding PEM data...
	I0719 07:24:11.169340    7178 main.go:141] libmachine: Parsing certificate...
	I0719 07:24:11.169734    7178 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:24:11.303022    7178 main.go:141] libmachine: Creating SSH key...
	I0719 07:24:11.461459    7178 main.go:141] libmachine: Creating Disk image...
	I0719 07:24:11.461465    7178 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:24:11.461669    7178 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:24:11.471363    7178 main.go:141] libmachine: STDOUT: 
	I0719 07:24:11.471382    7178 main.go:141] libmachine: STDERR: 
	I0719 07:24:11.471443    7178 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2 +20000M
	I0719 07:24:11.479265    7178 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:24:11.479291    7178 main.go:141] libmachine: STDERR: 
	I0719 07:24:11.479307    7178 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:24:11.479310    7178 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:24:11.479318    7178 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:24:11.479347    7178 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:ea:42:d7:c9:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:24:11.480954    7178 main.go:141] libmachine: STDOUT: 
	I0719 07:24:11.480968    7178 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:24:11.480985    7178 client.go:171] duration metric: took 311.821375ms to LocalClient.Create
	I0719 07:24:13.483155    7178 start.go:128] duration metric: took 2.338499375s to createHost
	I0719 07:24:13.483221    7178 start.go:83] releasing machines lock for "ha-991000", held for 2.338613083s
	W0719 07:24:13.483264    7178 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:24:13.495475    7178 out.go:177] * Deleting "ha-991000" in qemu2 ...
	W0719 07:24:13.518328    7178 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:24:13.518392    7178 start.go:729] Will try again in 5 seconds ...
	I0719 07:24:18.520558    7178 start.go:360] acquireMachinesLock for ha-991000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:24:18.520975    7178 start.go:364] duration metric: took 329.958µs to acquireMachinesLock for "ha-991000"
	I0719 07:24:18.521082    7178 start.go:93] Provisioning new machine with config: &{Name:ha-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:24:18.521493    7178 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:24:18.530684    7178 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:24:18.579471    7178 start.go:159] libmachine.API.Create for "ha-991000" (driver="qemu2")
	I0719 07:24:18.579516    7178 client.go:168] LocalClient.Create starting
	I0719 07:24:18.579615    7178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:24:18.579697    7178 main.go:141] libmachine: Decoding PEM data...
	I0719 07:24:18.579713    7178 main.go:141] libmachine: Parsing certificate...
	I0719 07:24:18.579770    7178 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:24:18.579816    7178 main.go:141] libmachine: Decoding PEM data...
	I0719 07:24:18.579830    7178 main.go:141] libmachine: Parsing certificate...
	I0719 07:24:18.580400    7178 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:24:18.709982    7178 main.go:141] libmachine: Creating SSH key...
	I0719 07:24:18.831792    7178 main.go:141] libmachine: Creating Disk image...
	I0719 07:24:18.831798    7178 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:24:18.831991    7178 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:24:18.841108    7178 main.go:141] libmachine: STDOUT: 
	I0719 07:24:18.841129    7178 main.go:141] libmachine: STDERR: 
	I0719 07:24:18.841184    7178 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2 +20000M
	I0719 07:24:18.849125    7178 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:24:18.849139    7178 main.go:141] libmachine: STDERR: 
	I0719 07:24:18.849155    7178 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:24:18.849158    7178 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:24:18.849164    7178 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:24:18.849195    7178 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:e9:e0:08:0b:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:24:18.850821    7178 main.go:141] libmachine: STDOUT: 
	I0719 07:24:18.850834    7178 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:24:18.850845    7178 client.go:171] duration metric: took 271.324958ms to LocalClient.Create
	I0719 07:24:20.852999    7178 start.go:128] duration metric: took 2.331491917s to createHost
	I0719 07:24:20.853063    7178 start.go:83] releasing machines lock for "ha-991000", held for 2.332080542s
	W0719 07:24:20.853361    7178 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-991000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:24:20.863017    7178 out.go:177] 
	W0719 07:24:20.867112    7178 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:24:20.867147    7178 out.go:239] * 
	* 
	W0719 07:24:20.869734    7178 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:24:20.877919    7178 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-991000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (66.315042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.93s)
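
Both VM creation attempts above die at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket, which suggests the socket_vmnet daemon was not running on the agent. A quick probe of that socket, sketched below (this is not part of the test suite), reproduces the failure mode:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client hands to qemu; a
        // missing or dead daemon yields the "Connection refused" seen above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }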

TestMultiControlPlane/serial/DeployApp (104.66s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.622084ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-991000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- rollout status deployment/busybox: exit status 1 (56.231875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.08925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.299667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.0885ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.170792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.891458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.501375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.281125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.481416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.827708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.603667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.602667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.842458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.408875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.193041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.826834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.270542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (104.66s)
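
The wall of identical errors above is the pod-IP poll at ha_test.go:140 rerunning the same minikube kubectl invocation until its budget runs out; with no cluster behind the profile, every attempt fails the same way. The pattern, sketched below (the attempt count and sleep interval are illustrative, not the test's actual values):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Re-run the query the log shows, stopping on the first success.
        args := []string{"kubectl", "-p", "ha-991000", "--",
            "get", "pods", "-o", "jsonpath={.items[*].status.podIP}"}
        for attempt := 1; attempt <= 11; attempt++ {
            out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
            if err == nil {
                fmt.Println("pod IPs:", string(out))
                return
            }
            fmt.Printf("attempt %d: %v: %s\n", attempt, err, out)
            time.Sleep(10 * time.Second) // backoff assumed
        }
        fmt.Println("failed to resolve pod IPs")
    }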

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-991000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.181ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-991000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.880417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-991000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-991000 -v=7 --alsologtostderr: exit status 83 (43.206541ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-991000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:26:05.736432    7260 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:05.737188    7260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:05.737191    7260 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:05.737196    7260 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:05.737370    7260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:05.737606    7260 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:05.737779    7260 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:05.742280    7260 out.go:177] * The control-plane node ha-991000 host is not running: state=Stopped
	I0719 07:26:05.746216    7260 out.go:177]   To start a cluster, run: "minikube start -p ha-991000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-991000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.356708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-991000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-991000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.940875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-991000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-991000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-991000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.693666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
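
The second error at ha_test.go:264, "unexpected end of JSON input", is simply what encoding/json reports when handed zero bytes; the failed kubectl call produced no stdout for the label list to decode. A minimal reproduction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl wrote nothing to stdout, so the decoder sees empty input.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }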

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-991000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-991000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.036792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
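
The check at ha_test.go:304 decodes the profile list and counts Config.Nodes; the JSON quoted in the log carries exactly one node, so the expected-4 assertion fails before status is even considered. A trimmed sketch of that decode follows; the struct fields here are stand-ins for the test's real types, not minikube's API:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors just the fields the node-count check reads.
    type profileList struct {
        Valid []struct {
            Name   string
            Status string
            Config struct {
                Nodes []struct{ Name string }
            }
        } `json:"valid"`
    }

    func main() {
        // Heavily trimmed version of the JSON quoted in the log above.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-991000","Status":"Stopped","Config":{"Nodes":[{"Name":""}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        p := pl.Valid[0]
        fmt.Printf("nodes=%d status=%s (test expects 4 nodes and \"HAppy\")\n",
            len(p.Config.Nodes), p.Status)
    }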

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status --output json -v=7 --alsologtostderr: exit status 7 (30.198458ms)

                                                
                                                
-- stdout --
	{"Name":"ha-991000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:26:05.942296    7272 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:05.942448    7272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:05.942451    7272 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:05.942453    7272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:05.942574    7272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:05.942686    7272 out.go:298] Setting JSON to true
	I0719 07:26:05.942696    7272 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:05.942754    7272 notify.go:220] Checking for updates...
	I0719 07:26:05.942910    7272 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:05.942916    7272 status.go:255] checking status of ha-991000 ...
	I0719 07:26:05.943137    7272 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:05.943142    7272 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:05.943144    7272 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-991000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.922291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
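
The decode failure at ha_test.go:333 is a shape mismatch rather than corrupt output: with only one node, status --output json printed a single JSON object, while the test unmarshals into a slice of cmd.Status. A reproduction with a stand-in struct:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status stands in for minikube's cmd.Status type.
    type Status struct {
        Name string
        Host string
    }

    func main() {
        // Single-node output is one object, not an array, so decoding
        // into a slice fails exactly as the log reports.
        raw := []byte(`{"Name":"ha-991000","Host":"Stopped"}`)
        var statuses []Status
        err := json.Unmarshal(raw, &statuses)
        fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
    }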

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 node stop m02 -v=7 --alsologtostderr: exit status 85 (46.573417ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:26:06.002316    7276 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:06.002910    7276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.002916    7276 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:06.002918    7276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.003102    7276 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:06.003329    7276 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:06.003530    7276 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:06.007458    7276 out.go:177] 
	W0719 07:26:06.010372    7276 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0719 07:26:06.010382    7276 out.go:239] * 
	* 
	W0719 07:26:06.012350    7276 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:26:06.016396    7276 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-991000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (29.356917ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:06.049006    7278 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:06.049146    7278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.049149    7278 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:06.049151    7278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.049283    7278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:06.049406    7278 out.go:298] Setting JSON to false
	I0719 07:26:06.049415    7278 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:06.049478    7278 notify.go:220] Checking for updates...
	I0719 07:26:06.049620    7278 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:06.049637    7278 status.go:255] checking status of ha-991000 ...
	I0719 07:26:06.049858    7278 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:06.049861    7278 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:06.049864    7278 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr": ha-991000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr": ha-991000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr": ha-991000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr": ha-991000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.9285ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
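
The GUEST_NODE_RETRIEVE error above means the profile has no m02 node to stop: the earlier TestMultiControlPlane/serial/StartCluster step failed, so only the primary node was ever created. A minimal sketch for confirming this before re-running the stop, using only the binary path and profile name from this report:

	# Sketch: list the nodes the profile actually has before stopping one.
	out/minikube-darwin-arm64 node list -p ha-991000
	# Only then retry the step that failed here.
	out/minikube-darwin-arm64 -p ha-991000 node stop m02 -v=7 --alsologtostderr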

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-991000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.862208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
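
The assertion reads the Status field of profile "ha-991000" from the JSON quoted above; the run reports "Stopped" where the test expects "Degraded". A quick way to pull the same field out of the same command, assuming jq is available on the runner (jq is not part of this report's toolchain):

	# Sketch: extract the profile status; jq is an assumption here.
	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-991000") | .Status'
	# Prints "Stopped" for this run.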

TestMultiControlPlane/serial/RestartSecondaryNode (46.51s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.7705ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0719 07:26:06.186150    7287 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:06.186635    7287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.186639    7287 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:06.186641    7287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.186770    7287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:06.186960    7287 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:06.187131    7287 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:06.190504    7287 out.go:177] 
	W0719 07:26:06.194227    7287 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0719 07:26:06.194230    7287 out.go:239] * 
	* 
	W0719 07:26:06.196133    7287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:26:06.200287    7287 out.go:177] 

** /stderr **
ha_test.go:422: I0719 07:26:06.186150    7287 out.go:291] Setting OutFile to fd 1 ...
I0719 07:26:06.186635    7287 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:26:06.186639    7287 out.go:304] Setting ErrFile to fd 2...
I0719 07:26:06.186641    7287 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:26:06.186770    7287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:26:06.186960    7287 mustload.go:65] Loading cluster: ha-991000
I0719 07:26:06.187131    7287 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:26:06.190504    7287 out.go:177] 
W0719 07:26:06.194227    7287 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0719 07:26:06.194230    7287 out.go:239] * 
* 
W0719 07:26:06.196133    7287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 07:26:06.200287    7287 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-991000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (30.035375ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:06.233747    7289 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:06.233893    7289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.233896    7289 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:06.233898    7289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:06.234040    7289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:06.234164    7289 out.go:298] Setting JSON to false
	I0719 07:26:06.234173    7289 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:06.234236    7289 notify.go:220] Checking for updates...
	I0719 07:26:06.234392    7289 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:06.234398    7289 status.go:255] checking status of ha-991000 ...
	I0719 07:26:06.234595    7289 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:06.234598    7289 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:06.234600    7289 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (73.814625ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:07.038168    7291 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:07.038385    7291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:07.038389    7291 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:07.038392    7291 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:07.038567    7291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:07.038724    7291 out.go:298] Setting JSON to false
	I0719 07:26:07.038737    7291 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:07.038770    7291 notify.go:220] Checking for updates...
	I0719 07:26:07.039016    7291 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:07.039024    7291 status.go:255] checking status of ha-991000 ...
	I0719 07:26:07.039313    7291 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:07.039318    7291 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:07.039321    7291 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (73.166292ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:08.372468    7293 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:08.372688    7293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:08.372692    7293 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:08.372695    7293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:08.372878    7293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:08.373045    7293 out.go:298] Setting JSON to false
	I0719 07:26:08.373056    7293 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:08.373090    7293 notify.go:220] Checking for updates...
	I0719 07:26:08.373288    7293 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:08.373296    7293 status.go:255] checking status of ha-991000 ...
	I0719 07:26:08.373583    7293 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:08.373588    7293 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:08.373591    7293 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (73.427541ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:10.939650    7295 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:10.939870    7295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:10.939874    7295 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:10.939877    7295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:10.940051    7295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:10.940214    7295 out.go:298] Setting JSON to false
	I0719 07:26:10.940227    7295 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:10.940267    7295 notify.go:220] Checking for updates...
	I0719 07:26:10.940475    7295 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:10.940483    7295 status.go:255] checking status of ha-991000 ...
	I0719 07:26:10.940769    7295 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:10.940774    7295 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:10.940777    7295 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (72.589583ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:15.592543    7297 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:15.592760    7297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:15.592765    7297 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:15.592769    7297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:15.592967    7297 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:15.593142    7297 out.go:298] Setting JSON to false
	I0719 07:26:15.593156    7297 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:15.593202    7297 notify.go:220] Checking for updates...
	I0719 07:26:15.593425    7297 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:15.593434    7297 status.go:255] checking status of ha-991000 ...
	I0719 07:26:15.593719    7297 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:15.593724    7297 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:15.593728    7297 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (71.898583ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:19.836003    7301 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:19.836191    7301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:19.836195    7301 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:19.836198    7301 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:19.836364    7301 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:19.836508    7301 out.go:298] Setting JSON to false
	I0719 07:26:19.836521    7301 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:19.836561    7301 notify.go:220] Checking for updates...
	I0719 07:26:19.836797    7301 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:19.836805    7301 status.go:255] checking status of ha-991000 ...
	I0719 07:26:19.837079    7301 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:19.837083    7301 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:19.837087    7301 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (73.720042ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:24.696098    7303 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:24.696308    7303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:24.696312    7303 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:24.696315    7303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:24.696486    7303 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:24.696654    7303 out.go:298] Setting JSON to false
	I0719 07:26:24.696670    7303 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:24.696713    7303 notify.go:220] Checking for updates...
	I0719 07:26:24.696951    7303 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:24.696960    7303 status.go:255] checking status of ha-991000 ...
	I0719 07:26:24.697216    7303 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:24.697221    7303 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:24.697224    7303 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (72.323375ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:41.102293    7305 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:41.102496    7305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:41.102500    7305 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:41.102503    7305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:41.102677    7305 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:41.102846    7305 out.go:298] Setting JSON to false
	I0719 07:26:41.102862    7305 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:41.102928    7305 notify.go:220] Checking for updates...
	I0719 07:26:41.103110    7305 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:41.103118    7305 status.go:255] checking status of ha-991000 ...
	I0719 07:26:41.103409    7305 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:41.103414    7305 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:41.103417    7305 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (73.921125ms)

-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:26:52.633551    7307 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:52.633793    7307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:52.633798    7307 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:52.633802    7307 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:52.633995    7307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:52.634176    7307 out.go:298] Setting JSON to false
	I0719 07:26:52.634191    7307 mustload.go:65] Loading cluster: ha-991000
	I0719 07:26:52.634226    7307 notify.go:220] Checking for updates...
	I0719 07:26:52.634504    7307 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:52.634514    7307 status.go:255] checking status of ha-991000 ...
	I0719 07:26:52.634822    7307 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:26:52.634827    7307 status.go:343] host is not running, skipping remaining checks
	I0719 07:26:52.634831    7307 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (34.145625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (46.51s)
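
The repeated ha_test.go:428 status calls above are the test polling, with growing delays, for the restarted node to report healthy; every attempt exits with status 7 because the host never leaves "Stopped". The same wait pattern as a shell sketch (the 12x5s budget is an assumption, not the test's actual timeout):

	# Sketch: poll 'status' until it succeeds or the assumed budget runs out.
	for i in $(seq 1 12); do
	  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr && break
	  sleep 5
	done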

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-991000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-991000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (30.065416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)
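
Both assertions above parse the same 'profile list --output json' payload: the node count comes from the length of the Config.Nodes array (1 here, 4 expected) and the health from Status ("Stopped" here, "HAppy" expected). Assuming jq again, both values can be read in one pass:

	# Sketch: node count plus status for ha-991000; jq is an assumption.
	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-991000") | "\(.Config.Nodes | length) \(.Status)"'
	# Prints "1 Stopped" for this run.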

TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.03s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-991000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-991000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-991000 -v=7 --alsologtostderr: (2.676041125s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-991000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-991000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226173833s)

-- stdout --
	* [ha-991000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-991000" primary control-plane node in "ha-991000" cluster
	* Restarting existing qemu2 VM for "ha-991000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-991000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
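
Both restart attempts in the stdout above fail the same way: the qemu2 driver hands the VM's network socket to socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, so the socket_vmnet daemon is not serving on this runner. Two hedged first checks (the Homebrew service name assumes a brew-managed socket_vmnet install, which this report does not state):

	# Sketch: is the socket present, and is the daemon running?
	ls -l /var/run/socket_vmnet                    # path taken from the log above
	sudo brew services list | grep socket_vmnet    # assumes Homebrew-managed install
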
** stderr ** 
	I0719 07:26:55.514733    7336 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:26:55.514894    7336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:55.514898    7336 out.go:304] Setting ErrFile to fd 2...
	I0719 07:26:55.514901    7336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:26:55.515076    7336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:26:55.516348    7336 out.go:298] Setting JSON to false
	I0719 07:26:55.535967    7336 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5184,"bootTime":1721394031,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:26:55.536037    7336 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:26:55.541362    7336 out.go:177] * [ha-991000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:26:55.547269    7336 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:26:55.547297    7336 notify.go:220] Checking for updates...
	I0719 07:26:55.554281    7336 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:26:55.557290    7336 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:26:55.560331    7336 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:26:55.563297    7336 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:26:55.566294    7336 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:26:55.569584    7336 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:26:55.569647    7336 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:26:55.574337    7336 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:26:55.581237    7336 start.go:297] selected driver: qemu2
	I0719 07:26:55.581243    7336 start.go:901] validating driver "qemu2" against &{Name:ha-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:26:55.581296    7336 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:26:55.583768    7336 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:26:55.583792    7336 cni.go:84] Creating CNI manager for ""
	I0719 07:26:55.583798    7336 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 07:26:55.583863    7336 start.go:340] cluster config:
	{Name:ha-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-991000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:26:55.587547    7336 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:26:55.595260    7336 out.go:177] * Starting "ha-991000" primary control-plane node in "ha-991000" cluster
	I0719 07:26:55.599237    7336 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:26:55.599254    7336 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:26:55.599271    7336 cache.go:56] Caching tarball of preloaded images
	I0719 07:26:55.599337    7336 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:26:55.599343    7336 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:26:55.599412    7336 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/ha-991000/config.json ...
	I0719 07:26:55.599745    7336 start.go:360] acquireMachinesLock for ha-991000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:26:55.599783    7336 start.go:364] duration metric: took 31.083µs to acquireMachinesLock for "ha-991000"
	I0719 07:26:55.599792    7336 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:26:55.599797    7336 fix.go:54] fixHost starting: 
	I0719 07:26:55.599918    7336 fix.go:112] recreateIfNeeded on ha-991000: state=Stopped err=<nil>
	W0719 07:26:55.599926    7336 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:26:55.604256    7336 out.go:177] * Restarting existing qemu2 VM for "ha-991000" ...
	I0719 07:26:55.612284    7336 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:26:55.612325    7336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:e9:e0:08:0b:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:26:55.614403    7336 main.go:141] libmachine: STDOUT: 
	I0719 07:26:55.614428    7336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:26:55.614459    7336 fix.go:56] duration metric: took 14.660584ms for fixHost
	I0719 07:26:55.614469    7336 start.go:83] releasing machines lock for "ha-991000", held for 14.676709ms
	W0719 07:26:55.614475    7336 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:26:55.614509    7336 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:26:55.614514    7336 start.go:729] Will try again in 5 seconds ...
	I0719 07:27:00.616677    7336 start.go:360] acquireMachinesLock for ha-991000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:27:00.617066    7336 start.go:364] duration metric: took 310.084µs to acquireMachinesLock for "ha-991000"
	I0719 07:27:00.617187    7336 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:27:00.617208    7336 fix.go:54] fixHost starting: 
	I0719 07:27:00.617955    7336 fix.go:112] recreateIfNeeded on ha-991000: state=Stopped err=<nil>
	W0719 07:27:00.617987    7336 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:27:00.629761    7336 out.go:177] * Restarting existing qemu2 VM for "ha-991000" ...
	I0719 07:27:00.634300    7336 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:27:00.634537    7336 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:e9:e0:08:0b:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:27:00.643970    7336 main.go:141] libmachine: STDOUT: 
	I0719 07:27:00.644045    7336 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:27:00.644130    7336 fix.go:56] duration metric: took 26.920542ms for fixHost
	I0719 07:27:00.644155    7336 start.go:83] releasing machines lock for "ha-991000", held for 27.062875ms
	W0719 07:27:00.644342    7336 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-991000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-991000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:27:00.652428    7336 out.go:177] 
	W0719 07:27:00.656425    7336 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:27:00.656460    7336 out.go:239] * 
	* 
	W0719 07:27:00.659165    7336 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:27:00.666350    7336 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-991000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-991000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (33.103125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (8.03s)
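Note: every driver-start failure in this report reduces to the same refused connect on the socket_vmnet unix socket that socket_vmnet_client hands to qemu as fd 3. A minimal sketch, assuming only the Go standard library (this probe is not part of the test suite), that reproduces the condition outside minikube:

	// probe.go — dial the socket that socket_vmnet_client connects to.
	// With no socket_vmnet daemon listening on /var/run/socket_vmnet,
	// this fails with the same "connection refused" seen above.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintln(os.Stderr, "dial /var/run/socket_vmnet:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running such a probe before the suite would distinguish a missing/not-running daemon on the agent from a qemu-side failure.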

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.307125ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-991000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:27:00.810970    7348 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:27:00.811394    7348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:00.811398    7348 out.go:304] Setting ErrFile to fd 2...
	I0719 07:27:00.811401    7348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:00.811536    7348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:27:00.811759    7348 mustload.go:65] Loading cluster: ha-991000
	I0719 07:27:00.811946    7348 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:27:00.816394    7348 out.go:177] * The control-plane node ha-991000 host is not running: state=Stopped
	I0719 07:27:00.819351    7348 out.go:177]   To start a cluster, run: "minikube start -p ha-991000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-991000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (29.075125ms)

                                                
                                                
-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:27:00.850610    7350 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:27:00.850765    7350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:00.850771    7350 out.go:304] Setting ErrFile to fd 2...
	I0719 07:27:00.850774    7350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:00.850894    7350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:27:00.851016    7350 out.go:298] Setting JSON to false
	I0719 07:27:00.851026    7350 mustload.go:65] Loading cluster: ha-991000
	I0719 07:27:00.851085    7350 notify.go:220] Checking for updates...
	I0719 07:27:00.851239    7350 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:27:00.851245    7350 status.go:255] checking status of ha-991000 ...
	I0719 07:27:00.851448    7350 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:27:00.851452    7350 status.go:343] host is not running, skipping remaining checks
	I0719 07:27:00.851455    7350 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.808833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-991000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.966042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
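Note: the ha_test.go:413 assertion above inspects the JSON emitted by "profile list --output json". Below is a sketch that reads only the fields actually visible in the dump above ("valid", "Name", "Status", "Config.Nodes"); the struct is illustrative, not minikube's own types:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []json.RawMessage `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range pl.Valid {
			// This run: ha-991000 status=Stopped nodes=1, where the
			// test expects "Degraded" (and 4 nodes later in the suite).
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}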

                                                
                                    
TestMultiControlPlane/serial/StopCluster (2.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-991000 stop -v=7 --alsologtostderr: (2.077380167s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr: exit status 7 (68.127959ms)

                                                
                                                
-- stdout --
	ha-991000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:27:03.101881    7371 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:27:03.102091    7371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:03.102095    7371 out.go:304] Setting ErrFile to fd 2...
	I0719 07:27:03.102098    7371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:03.102262    7371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:27:03.102435    7371 out.go:298] Setting JSON to false
	I0719 07:27:03.102447    7371 mustload.go:65] Loading cluster: ha-991000
	I0719 07:27:03.102491    7371 notify.go:220] Checking for updates...
	I0719 07:27:03.102687    7371 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:27:03.102695    7371 status.go:255] checking status of ha-991000 ...
	I0719 07:27:03.102970    7371 status.go:330] ha-991000 host status = "Stopped" (err=<nil>)
	I0719 07:27:03.102975    7371 status.go:343] host is not running, skipping remaining checks
	I0719 07:27:03.102978    7371 status.go:257] ha-991000 status: &{Name:ha-991000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr": ha-991000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr": ha-991000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-991000 status -v=7 --alsologtostderr": ha-991000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (32.813959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.18s)
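Note: the three ha_test.go assertions above (lines 543, 549, 552) report on how many control-plane, kubelet, and apiserver entries appear in the status output; with only one node ever created, each count is 1. A rough sketch of that kind of counting (the real test logic may differ):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// status exits 7 when the host is stopped, so a non-nil error
		// is expected here; the captured output is still usable.
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "ha-991000",
			"status", "-v=7", "--alsologtostderr").CombinedOutput()
		s := string(out)
		fmt.Println("control planes:", strings.Count(s, "type: Control Plane")) // 1, want 2
		fmt.Println("kubelets stopped:", strings.Count(s, "kubelet: Stopped"))  // 1, want 3
		fmt.Println("apiservers stopped:", strings.Count(s, "apiserver: Stopped"))
	}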

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-991000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-991000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.178984458s)

                                                
                                                
-- stdout --
	* [ha-991000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-991000" primary control-plane node in "ha-991000" cluster
	* Restarting existing qemu2 VM for "ha-991000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-991000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:27:03.165281    7375 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:27:03.165429    7375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:03.165432    7375 out.go:304] Setting ErrFile to fd 2...
	I0719 07:27:03.165435    7375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:03.165550    7375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:27:03.166594    7375 out.go:298] Setting JSON to false
	I0719 07:27:03.182709    7375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5192,"bootTime":1721394031,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:27:03.182779    7375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:27:03.188064    7375 out.go:177] * [ha-991000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:27:03.195021    7375 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:27:03.195056    7375 notify.go:220] Checking for updates...
	I0719 07:27:03.201992    7375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:27:03.204913    7375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:27:03.207938    7375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:27:03.210995    7375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:27:03.213958    7375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:27:03.217239    7375 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:27:03.217503    7375 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:27:03.221939    7375 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:27:03.228930    7375 start.go:297] selected driver: qemu2
	I0719 07:27:03.228940    7375 start.go:901] validating driver "qemu2" against &{Name:ha-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-991000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:27:03.229026    7375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:27:03.231270    7375 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:27:03.231309    7375 cni.go:84] Creating CNI manager for ""
	I0719 07:27:03.231315    7375 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 07:27:03.231382    7375 start.go:340] cluster config:
	{Name:ha-991000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-991000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:27:03.234995    7375 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:27:03.241899    7375 out.go:177] * Starting "ha-991000" primary control-plane node in "ha-991000" cluster
	I0719 07:27:03.245902    7375 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:27:03.245919    7375 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:27:03.245936    7375 cache.go:56] Caching tarball of preloaded images
	I0719 07:27:03.245996    7375 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:27:03.246002    7375 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:27:03.246067    7375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/ha-991000/config.json ...
	I0719 07:27:03.246473    7375 start.go:360] acquireMachinesLock for ha-991000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:27:03.246503    7375 start.go:364] duration metric: took 23.708µs to acquireMachinesLock for "ha-991000"
	I0719 07:27:03.246512    7375 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:27:03.246518    7375 fix.go:54] fixHost starting: 
	I0719 07:27:03.246638    7375 fix.go:112] recreateIfNeeded on ha-991000: state=Stopped err=<nil>
	W0719 07:27:03.246647    7375 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:27:03.255001    7375 out.go:177] * Restarting existing qemu2 VM for "ha-991000" ...
	I0719 07:27:03.258944    7375 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:27:03.258982    7375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:e9:e0:08:0b:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:27:03.261055    7375 main.go:141] libmachine: STDOUT: 
	I0719 07:27:03.261075    7375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:27:03.261104    7375 fix.go:56] duration metric: took 14.586209ms for fixHost
	I0719 07:27:03.261108    7375 start.go:83] releasing machines lock for "ha-991000", held for 14.601125ms
	W0719 07:27:03.261114    7375 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:27:03.261150    7375 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:27:03.261155    7375 start.go:729] Will try again in 5 seconds ...
	I0719 07:27:08.261976    7375 start.go:360] acquireMachinesLock for ha-991000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:27:08.262495    7375 start.go:364] duration metric: took 375.209µs to acquireMachinesLock for "ha-991000"
	I0719 07:27:08.262654    7375 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:27:08.262676    7375 fix.go:54] fixHost starting: 
	I0719 07:27:08.263502    7375 fix.go:112] recreateIfNeeded on ha-991000: state=Stopped err=<nil>
	W0719 07:27:08.263529    7375 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:27:08.268070    7375 out.go:177] * Restarting existing qemu2 VM for "ha-991000" ...
	I0719 07:27:08.271960    7375 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:27:08.272252    7375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:e9:e0:08:0b:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/ha-991000/disk.qcow2
	I0719 07:27:08.281633    7375 main.go:141] libmachine: STDOUT: 
	I0719 07:27:08.281691    7375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:27:08.281772    7375 fix.go:56] duration metric: took 19.099292ms for fixHost
	I0719 07:27:08.281789    7375 start.go:83] releasing machines lock for "ha-991000", held for 19.271ms
	W0719 07:27:08.281953    7375 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-991000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-991000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:27:08.288916    7375 out.go:177] 
	W0719 07:27:08.292957    7375 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:27:08.292978    7375 out.go:239] * 
	* 
	W0719 07:27:08.295673    7375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:27:08.303932    7375 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-991000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (67.321459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
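Note: the timestamps above show the exact retry shape: fixHost fails at 07:27:03, start.go:729 waits 5 seconds, a second attempt fails at 07:27:08, and the run exits with GUEST_PROVISION (exit status 80). A control-flow sketch of that sequence (function names are illustrative, not minikube internals):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func startHost() error {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		err := startHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			err = startHost()
		}
		if err != nil {
			// surfaced upstream as exit status 80
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}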

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-991000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.791666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-991000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-991000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.468792ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-991000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-991000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:27:08.492400    7390 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:27:08.492560    7390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:08.492563    7390 out.go:304] Setting ErrFile to fd 2...
	I0719 07:27:08.492565    7390 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:08.492714    7390 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:27:08.492946    7390 mustload.go:65] Loading cluster: ha-991000
	I0719 07:27:08.493121    7390 config.go:182] Loaded profile config "ha-991000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:27:08.497852    7390 out.go:177] * The control-plane node ha-991000 host is not running: state=Stopped
	I0719 07:27:08.501829    7390 out.go:177]   To start a cluster, run: "minikube start -p ha-991000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-991000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.190875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-991000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-991000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-991000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-991000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-991000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-991000 -n ha-991000: exit status 7 (29.286958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-991000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.89s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-821000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-821000 --driver=qemu2 : exit status 80 (9.818007s)

                                                
                                                
-- stdout --
	* [image-821000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-821000" primary control-plane node in "image-821000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-821000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-821000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-821000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-821000 -n image-821000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-821000 -n image-821000: exit status 7 (68.41ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-821000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.89s)

                                                
                                    
TestJSONOutput/start/Command (9.92s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-120000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-120000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.920048458s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"84e10b56-24ac-46fd-ac5a-324047a3d22d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-120000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"33948ba0-e4cc-4c93-9647-aaf8a02023bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"61ca827a-453f-4cc6-8f5e-fc36d7840128","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig"}}
	{"specversion":"1.0","id":"0031d2df-218e-4e73-aba1-a7959d526ac8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b9566880-1e1d-4e51-b15e-c21b7fce4346","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1958a347-6673-46bd-9b9f-5a6007bc3b93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube"}}
	{"specversion":"1.0","id":"e389c467-98e6-40bd-a4c5-7a4cab68a62d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f73390c-3928-4de3-8ba6-cdbf460aa8ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d16e9073-a79e-4bd9-a5b7-976197780b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6ee97f55-fba0-470c-91dc-65453ed41094","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-120000\" primary control-plane node in \"json-output-120000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4837672-06ca-4562-ab32-459befb71f9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"e5dddc57-4aa6-4a3d-a0c5-bdf65438be78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-120000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d7a8de7-dbd2-49a0-a13f-fed1dd9dd794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"89c59fc4-4e44-421a-bc6f-85007ae6e780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"09c65551-1686-469d-878d-c2a8086bc12b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-120000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"f5107615-5b5a-4cfa-8c43-d6fb0b7fb358","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"df0bc9a6-2733-44ea-a3cf-04d5bbd10ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-120000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.92s)
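
Note: besides the shared socket_vmnet failure, this test fails a second way: json_output_test.go decodes stdout line by line as CloudEvents, and the raw "OUTPUT: " / "ERROR: ..." lines that leak into stdout are not JSON. A hedged one-line reproduction of the reported decode error (the map type is chosen for illustration, not the test's actual struct):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// A leaked plain-text line is rejected at its first byte. This prints
		// "invalid character 'O' looking for beginning of value", the exact
		// error reported here; the unpause failure below is the same
		// mechanism with '*' from "* The control-plane node ...".
		var event map[string]interface{}
		err := json.Unmarshal([]byte("OUTPUT: "), &event)
		fmt.Println(err)
	}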

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-120000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-120000 --output=json --user=testUser: exit status 83 (79.572917ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"867bc8bc-aee8-4399-acc6-c8c6e4e5d411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-120000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"8b1468c2-f0da-4d63-b856-944ccf05c995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-120000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-120000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-120000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-120000 --output=json --user=testUser: exit status 83 (44.348042ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-120000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-120000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-120000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-120000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.04s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-697000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-697000 --driver=qemu2 : exit status 80 (9.749333541s)

                                                
                                                
-- stdout --
	* [first-697000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-697000" primary control-plane node in "first-697000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-697000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-697000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-697000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-19 07:27:40.794728 -0700 PDT m=+490.068849459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-699000 -n second-699000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-699000 -n second-699000: exit status 85 (77.297667ms)

                                                
                                                
-- stdout --
	* Profile "second-699000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-699000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-699000" host is not running, skipping log retrieval (state="* Profile \"second-699000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-699000\"")
helpers_test.go:175: Cleaning up "second-699000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-699000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-19 07:27:40.981174 -0700 PDT m=+490.255296626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-697000 -n first-697000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-697000 -n first-697000: exit status 7 (29.12975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-697000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-697000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-697000
--- FAIL: TestMinikubeProfile (10.04s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-182000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-182000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.891677792s)

                                                
                                                
-- stdout --
	* [mount-start-1-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-182000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-182000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-182000 -n mount-start-1-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-182000 -n mount-start-1-182000: exit status 7 (66.6045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.96s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-023000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-023000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.761351792s)

                                                
                                                
-- stdout --
	* [multinode-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-023000" primary control-plane node in "multinode-023000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-023000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:27:51.242674    7528 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:27:51.242794    7528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:51.242797    7528 out.go:304] Setting ErrFile to fd 2...
	I0719 07:27:51.242800    7528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:27:51.242958    7528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:27:51.243962    7528 out.go:298] Setting JSON to false
	I0719 07:27:51.260217    7528 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5240,"bootTime":1721394031,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:27:51.260280    7528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:27:51.266225    7528 out.go:177] * [multinode-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:27:51.272100    7528 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:27:51.272138    7528 notify.go:220] Checking for updates...
	I0719 07:27:51.279212    7528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:27:51.282125    7528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:27:51.285173    7528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:27:51.288203    7528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:27:51.289575    7528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:27:51.292367    7528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:27:51.296182    7528 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:27:51.309178    7528 start.go:297] selected driver: qemu2
	I0719 07:27:51.309184    7528 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:27:51.309190    7528 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:27:51.311424    7528 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:27:51.314199    7528 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:27:51.317300    7528 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:27:51.317329    7528 cni.go:84] Creating CNI manager for ""
	I0719 07:27:51.317334    7528 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 07:27:51.317340    7528 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 07:27:51.317372    7528 start.go:340] cluster config:
	{Name:multinode-023000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:27:51.321097    7528 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:27:51.328183    7528 out.go:177] * Starting "multinode-023000" primary control-plane node in "multinode-023000" cluster
	I0719 07:27:51.332105    7528 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:27:51.332121    7528 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:27:51.332133    7528 cache.go:56] Caching tarball of preloaded images
	I0719 07:27:51.332195    7528 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:27:51.332200    7528 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:27:51.332423    7528 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/multinode-023000/config.json ...
	I0719 07:27:51.332438    7528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/multinode-023000/config.json: {Name:mk9ffe15feaa5c3f818d6fe3bae3524a315f63c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:27:51.332649    7528 start.go:360] acquireMachinesLock for multinode-023000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:27:51.332682    7528 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "multinode-023000"
	I0719 07:27:51.332692    7528 start.go:93] Provisioning new machine with config: &{Name:multinode-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:27:51.332725    7528 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:27:51.340140    7528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:27:51.357303    7528 start.go:159] libmachine.API.Create for "multinode-023000" (driver="qemu2")
	I0719 07:27:51.357335    7528 client.go:168] LocalClient.Create starting
	I0719 07:27:51.357405    7528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:27:51.357435    7528 main.go:141] libmachine: Decoding PEM data...
	I0719 07:27:51.357445    7528 main.go:141] libmachine: Parsing certificate...
	I0719 07:27:51.357483    7528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:27:51.357506    7528 main.go:141] libmachine: Decoding PEM data...
	I0719 07:27:51.357512    7528 main.go:141] libmachine: Parsing certificate...
	I0719 07:27:51.357912    7528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:27:51.474625    7528 main.go:141] libmachine: Creating SSH key...
	I0719 07:27:51.561223    7528 main.go:141] libmachine: Creating Disk image...
	I0719 07:27:51.561228    7528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:27:51.561418    7528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:27:51.570518    7528 main.go:141] libmachine: STDOUT: 
	I0719 07:27:51.570539    7528 main.go:141] libmachine: STDERR: 
	I0719 07:27:51.570584    7528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2 +20000M
	I0719 07:27:51.578330    7528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:27:51.578357    7528 main.go:141] libmachine: STDERR: 
	I0719 07:27:51.578377    7528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:27:51.578382    7528 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:27:51.578390    7528 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:27:51.578421    7528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:92:eb:00:55:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:27:51.580019    7528 main.go:141] libmachine: STDOUT: 
	I0719 07:27:51.580032    7528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:27:51.580050    7528 client.go:171] duration metric: took 222.712625ms to LocalClient.Create
	I0719 07:27:53.582211    7528 start.go:128] duration metric: took 2.249475125s to createHost
	I0719 07:27:53.582270    7528 start.go:83] releasing machines lock for "multinode-023000", held for 2.249592833s
	W0719 07:27:53.582361    7528 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:27:53.594524    7528 out.go:177] * Deleting "multinode-023000" in qemu2 ...
	W0719 07:27:53.616005    7528 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:27:53.616038    7528 start.go:729] Will try again in 5 seconds ...
	I0719 07:27:58.618153    7528 start.go:360] acquireMachinesLock for multinode-023000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:27:58.618636    7528 start.go:364] duration metric: took 385.166µs to acquireMachinesLock for "multinode-023000"
	I0719 07:27:58.618783    7528 start.go:93] Provisioning new machine with config: &{Name:multinode-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:27:58.619053    7528 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:27:58.625687    7528 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:27:58.672926    7528 start.go:159] libmachine.API.Create for "multinode-023000" (driver="qemu2")
	I0719 07:27:58.673087    7528 client.go:168] LocalClient.Create starting
	I0719 07:27:58.673219    7528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:27:58.673286    7528 main.go:141] libmachine: Decoding PEM data...
	I0719 07:27:58.673310    7528 main.go:141] libmachine: Parsing certificate...
	I0719 07:27:58.673388    7528 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:27:58.673441    7528 main.go:141] libmachine: Decoding PEM data...
	I0719 07:27:58.673455    7528 main.go:141] libmachine: Parsing certificate...
	I0719 07:27:58.673950    7528 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:27:58.804765    7528 main.go:141] libmachine: Creating SSH key...
	I0719 07:27:58.912452    7528 main.go:141] libmachine: Creating Disk image...
	I0719 07:27:58.912462    7528 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:27:58.912640    7528 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:27:58.922007    7528 main.go:141] libmachine: STDOUT: 
	I0719 07:27:58.922033    7528 main.go:141] libmachine: STDERR: 
	I0719 07:27:58.922077    7528 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2 +20000M
	I0719 07:27:58.929922    7528 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:27:58.929945    7528 main.go:141] libmachine: STDERR: 
	I0719 07:27:58.929953    7528 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:27:58.929958    7528 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:27:58.929971    7528 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:27:58.930003    7528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:75:af:37:06:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:27:58.931622    7528 main.go:141] libmachine: STDOUT: 
	I0719 07:27:58.931639    7528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:27:58.931658    7528 client.go:171] duration metric: took 258.565ms to LocalClient.Create
	I0719 07:28:00.933824    7528 start.go:128] duration metric: took 2.314756875s to createHost
	I0719 07:28:00.933879    7528 start.go:83] releasing machines lock for "multinode-023000", held for 2.315223416s
	W0719 07:28:00.934232    7528 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:28:00.944830    7528 out.go:177] 
	W0719 07:28:00.948907    7528 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:28:00.948964    7528 out.go:239] * 
	* 
	W0719 07:28:00.951569    7528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:28:00.960659    7528 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-023000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (67.019583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.83s)
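
Note: the -v=8 trace above shows where the error string comes from: libmachine runs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3, i.e. the client connects to the daemon socket and hands the connected descriptor to QEMU as fd 3. Below is a simplified Go sketch of that handoff idea; the real socket_vmnet_client is a separate C binary, so this is illustrative only, with the command and paths mirrored from the log:

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the daemon socket; in this report this dial is the step
		// that fails with "Connection refused", so QEMU is never launched.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles[0] becomes descriptor 3 in the child, which is why the
		// logged command line passes -netdev socket,id=net0,fd=3.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		_ = cmd.Start()
	}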

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (97.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.290084ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-023000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- rollout status deployment/busybox: exit status 1 (55.766375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.695916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.596292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.49175ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.518584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.521584ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.748917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.324209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.713959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.373834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.968ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.549583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.306542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.575958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.145375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-023000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.473292ms)

** stderr ** 
	error: no server found for cluster "multinode-023000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
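Note the doubled space in "exec  -- nslookup" in each attempt above: the pod-name argument is empty because the earlier name query returned nothing, so these three DNS probes never had a pod to run in. The pattern being exercised is in-pod resolution via kubectl exec; a small sketch of the same call, where podName is the value that stayed empty here:

    package probe

    import "os/exec"

    // In-pod DNS probe as the test issues it. An empty podName reproduces the
    // "exec  -- nslookup" shape above and cannot succeed even against a
    // running cluster.
    func nslookupInPod(profile, podName, host string) ([]byte, error) {
        return exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
            "--", "exec", podName, "--", "nslookup", host).CombinedOutput()
    }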
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (30.176334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (97.41s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-023000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.775166ms)

** stderr ** 
	error: no server found for cluster "multinode-023000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (30.136125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-023000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-023000 -v 3 --alsologtostderr: exit status 83 (40.923833ms)

-- stdout --
	* The control-plane node multinode-023000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-023000"

-- /stdout --
** stderr ** 
	I0719 07:29:38.574578    7611 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:38.574741    7611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.574745    7611 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:38.574747    7611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.574876    7611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:38.575122    7611 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:38.575306    7611 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:38.579846    7611 out.go:177] * The control-plane node multinode-023000 host is not running: state=Stopped
	I0719 07:29:38.583816    7611 out.go:177]   To start a cluster, run: "minikube start -p multinode-023000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-023000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (29.306291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-023000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-023000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.271333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-023000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-023000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-023000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
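The second message at multinode_test.go:230 is a knock-on error rather than a separate bug: kubectl wrote nothing to stdout, and encoding/json reports exactly "unexpected end of JSON input" when handed an empty byte slice. A minimal reproduction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels) // empty stdout from the failed kubectl call
        fmt.Println(err)                           // "unexpected end of JSON input"
    }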
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (30.150416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-023000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-023000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-023000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-023000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (29.382875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
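The assertion behind this failure is simpler than the blob suggests: decode the `profile list --output json` payload and count Config.Nodes for the profile. The JSON above carries exactly one node (the primary control plane) against an expected three, since no worker was ever added. A trimmed-down version of the check, with the struct pared to the keys visible in the payload:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields the check needs; the real payload is far larger.
    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println(err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test wants 3; the log shows 1
        }
    }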

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status --output json --alsologtostderr: exit status 7 (29.871416ms)

-- stdout --
	{"Name":"multinode-023000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0719 07:29:38.779558    7623 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:38.779721    7623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.779724    7623 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:38.779727    7623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.779853    7623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:38.779971    7623 out.go:298] Setting JSON to true
	I0719 07:29:38.779981    7623 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:38.780049    7623 notify.go:220] Checking for updates...
	I0719 07:29:38.780160    7623 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:38.780166    7623 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:38.780384    7623 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:38.780388    7623 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:38.780390    7623 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-023000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
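This one is a shape mismatch rather than the missing server: with a single node in the profile, `status --output json` prints one JSON object (see the stdout above), while the test decodes into a slice sized for a multi-node cluster. encoding/json will not place an object into a slice, hence the message. A reproduction with a stand-in Status type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Stand-in for the cmd.Status named in the error message.
    type Status struct {
        Name, Host string
    }

    func main() {
        single := []byte(`{"Name":"multinode-023000","Host":"Stopped"}`)

        var many []Status
        fmt.Println(json.Unmarshal(single, &many)) // json: cannot unmarshal object into Go value of type []main.Status

        var one Status
        fmt.Println(json.Unmarshal(single, &one), one) // <nil> {multinode-023000 Stopped}
    }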
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (29.29875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 node stop m03: exit status 85 (47.413792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-023000 node stop m03": exit status 85
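The exit status 85 travels with the GUEST_NODE_RETRIEVE reason in stderr: there is no m03 to stop, because AddNode failed earlier and the cluster still has only its primary node. The harness classifies these failures purely by exit code, roughly as below (command and profile taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-023000", "node", "stop", "m03")
        if err := cmd.Run(); err != nil {
            // A non-zero exit surfaces as *exec.ExitError; the code carries
            // minikube's reason classification (85 here: node not found).
            if exitErr, ok := err.(*exec.ExitError); ok {
                fmt.Println("exit status", exitErr.ExitCode())
            }
        }
    }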
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status: exit status 7 (30.065708ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr: exit status 7 (30.118292ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:29:38.917226    7631 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:38.917428    7631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.917431    7631 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:38.917433    7631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.917558    7631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:38.917679    7631 out.go:298] Setting JSON to false
	I0719 07:29:38.917689    7631 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:38.917752    7631 notify.go:220] Checking for updates...
	I0719 07:29:38.917877    7631 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:38.917883    7631 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:38.918098    7631 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:38.918102    7631 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:38.918105    7631 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr": multinode-023000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (29.950042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (42.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.342167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0719 07:29:38.976889    7635 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:38.977482    7635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.977485    7635 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:38.977488    7635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:38.977649    7635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:38.977883    7635 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:38.978064    7635 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:38.982698    7635 out.go:177] 
	W0719 07:29:38.986671    7635 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0719 07:29:38.986676    7635 out.go:239] * 
	* 
	W0719 07:29:38.988704    7635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:29:38.992523    7635 out.go:177] 

** /stderr **
multinode_test.go:284: I0719 07:29:38.976889    7635 out.go:291] Setting OutFile to fd 1 ...
I0719 07:29:38.977482    7635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:29:38.977485    7635 out.go:304] Setting ErrFile to fd 2...
I0719 07:29:38.977488    7635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 07:29:38.977649    7635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
I0719 07:29:38.977883    7635 mustload.go:65] Loading cluster: multinode-023000
I0719 07:29:38.978064    7635 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 07:29:38.982698    7635 out.go:177] 
W0719 07:29:38.986671    7635 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0719 07:29:38.986676    7635 out.go:239] * 
* 
W0719 07:29:38.988704    7635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 07:29:38.992523    7635 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-023000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (29.492375ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:29:39.025499    7637 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:39.025640    7637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:39.025650    7637 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:39.025653    7637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:39.025778    7637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:39.025894    7637 out.go:298] Setting JSON to false
	I0719 07:29:39.025903    7637 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:39.025967    7637 notify.go:220] Checking for updates...
	I0719 07:29:39.026134    7637 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:39.026140    7637 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:39.026353    7637 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:39.026357    7637 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:39.026359    7637 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (72.640542ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:29:39.797369    7639 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:39.797563    7639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:39.797567    7639 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:39.797570    7639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:39.797749    7639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:39.797918    7639 out.go:298] Setting JSON to false
	I0719 07:29:39.797930    7639 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:39.797978    7639 notify.go:220] Checking for updates...
	I0719 07:29:39.798194    7639 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:39.798203    7639 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:39.798486    7639 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:39.798491    7639 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:39.798494    7639 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (75.385917ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:29:41.394944    7641 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:41.395141    7641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:41.395145    7641 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:41.395148    7641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:41.395336    7641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:41.395495    7641 out.go:298] Setting JSON to false
	I0719 07:29:41.395508    7641 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:41.395547    7641 notify.go:220] Checking for updates...
	I0719 07:29:41.395768    7641 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:41.395775    7641 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:41.396059    7641 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:41.396064    7641 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:41.396067    7641 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (74.26475ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:29:44.374872    7643 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:44.375108    7643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:44.375112    7643 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:44.375115    7643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:44.375313    7643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:44.375512    7643 out.go:298] Setting JSON to false
	I0719 07:29:44.375525    7643 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:44.375572    7643 notify.go:220] Checking for updates...
	I0719 07:29:44.375809    7643 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:44.375818    7643 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:44.376104    7643 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:44.376109    7643 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:44.376112    7643 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (70.425208ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:29:48.603942    7648 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:48.604120    7648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:48.604124    7648 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:48.604127    7648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:48.604288    7648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:48.604448    7648 out.go:298] Setting JSON to false
	I0719 07:29:48.604461    7648 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:48.604506    7648 notify.go:220] Checking for updates...
	I0719 07:29:48.604704    7648 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:48.604711    7648 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:48.605019    7648 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:48.605024    7648 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:48.605027    7648 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (72.6355ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:29:55.740143    7650 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:29:55.740390    7650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:55.740394    7650 out.go:304] Setting ErrFile to fd 2...
	I0719 07:29:55.740397    7650 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:29:55.740590    7650 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:29:55.740768    7650 out.go:298] Setting JSON to false
	I0719 07:29:55.740782    7650 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:29:55.740826    7650 notify.go:220] Checking for updates...
	I0719 07:29:55.741036    7650 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:29:55.741044    7650 status.go:255] checking status of multinode-023000 ...
	I0719 07:29:55.741360    7650 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:29:55.741366    7650 status.go:343] host is not running, skipping remaining checks
	I0719 07:29:55.741369    7650 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (67.090416ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:30:01.387591    7692 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:01.387793    7692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:01.387798    7692 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:01.387801    7692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:01.388020    7692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:01.388222    7692 out.go:298] Setting JSON to false
	I0719 07:30:01.388237    7692 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:30:01.388289    7692 notify.go:220] Checking for updates...
	I0719 07:30:01.388541    7692 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:01.388551    7692 status.go:255] checking status of multinode-023000 ...
	I0719 07:30:01.388869    7692 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:30:01.388874    7692 status.go:343] host is not running, skipping remaining checks
	I0719 07:30:01.388882    7692 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (74.732042ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:30:10.806867    7914 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:10.807075    7914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:10.807079    7914 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:10.807082    7914 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:10.807266    7914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:10.807419    7914 out.go:298] Setting JSON to false
	I0719 07:30:10.807431    7914 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:30:10.807468    7914 notify.go:220] Checking for updates...
	I0719 07:30:10.807687    7914 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:10.807695    7914 status.go:255] checking status of multinode-023000 ...
	I0719 07:30:10.807987    7914 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:30:10.807992    7914 status.go:343] host is not running, skipping remaining checks
	I0719 07:30:10.807995    7914 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr: exit status 7 (72.894334ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:30:21.248146    7919 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:21.248359    7919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:21.248364    7919 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:21.248367    7919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:21.248557    7919 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:21.248744    7919 out.go:298] Setting JSON to false
	I0719 07:30:21.248758    7919 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:30:21.248802    7919 notify.go:220] Checking for updates...
	I0719 07:30:21.249065    7919 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:21.249073    7919 status.go:255] checking status of multinode-023000 ...
	I0719 07:30:21.249369    7919 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:30:21.249374    7919 status.go:343] host is not running, skipping remaining checks
	I0719 07:30:21.249377    7919 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-023000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (33.678583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (42.34s)
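The nine status attempts above are not a tight loop: their timestamps (07:29:39.0, 39.8, 41.4, 44.4, 48.6, 55.7, then 07:30:01.4, 10.8, 21.2) show the harness widening the interval between polls for roughly 42 seconds, which is where this subtest's duration comes from. A sketch of that retry shape (the intervals approximate the log; this is not the test's literal code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        delay := time.Second
        deadline := time.Now().Add(42 * time.Second)
        for time.Now().Before(deadline) {
            err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-023000",
                "status", "-v=7", "--alsologtostderr").Run()
            if err == nil {
                return // host came up
            }
            time.Sleep(delay)
            delay += delay / 2 // widen the interval, as the timestamps suggest
        }
        fmt.Println("gave up: host never left the Stopped state")
    }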

TestMultiNode/serial/RestartKeepsNodes (9.42s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-023000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-023000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-023000: (4.06785075s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-023000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-023000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.223099542s)

-- stdout --
	* [multinode-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-023000" primary control-plane node in "multinode-023000" cluster
	* Restarting existing qemu2 VM for "multinode-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
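The two "Connection refused" lines in the stdout above are the common thread of this report: the profile's network is socket_vmnet (see SocketVMnetPath in the config below), and a qemu2 VM cannot restart unless the socket_vmnet daemon is serving that unix socket on the host. A quick host-side probe, assuming the default path shown in the config:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // "connection refused" (daemon down) or "no such file or directory"
            // (never installed) both doom every VM start on this network.
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening")
    }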
** stderr ** 
	I0719 07:30:25.446446    7945 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:25.446618    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:25.446622    7945 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:25.446625    7945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:25.446779    7945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:25.448011    7945 out.go:298] Setting JSON to false
	I0719 07:30:25.467929    7945 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5394,"bootTime":1721394031,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:30:25.468009    7945 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:30:25.473054    7945 out.go:177] * [multinode-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:30:25.479985    7945 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:30:25.480028    7945 notify.go:220] Checking for updates...
	I0719 07:30:25.487920    7945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:30:25.490943    7945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:30:25.493956    7945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:30:25.496997    7945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:30:25.499954    7945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:30:25.503352    7945 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:25.503419    7945 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:30:25.507902    7945 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:30:25.514932    7945 start.go:297] selected driver: qemu2
	I0719 07:30:25.514938    7945 start.go:901] validating driver "qemu2" against &{Name:multinode-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:30:25.514988    7945 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:30:25.517817    7945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:30:25.517923    7945 cni.go:84] Creating CNI manager for ""
	I0719 07:30:25.517929    7945 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 07:30:25.517990    7945 start.go:340] cluster config:
	{Name:multinode-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:30:25.522181    7945 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:25.529924    7945 out.go:177] * Starting "multinode-023000" primary control-plane node in "multinode-023000" cluster
	I0719 07:30:25.532887    7945 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:30:25.532905    7945 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:30:25.532927    7945 cache.go:56] Caching tarball of preloaded images
	I0719 07:30:25.533007    7945 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:30:25.533013    7945 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:30:25.533083    7945 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/multinode-023000/config.json ...
	I0719 07:30:25.533485    7945 start.go:360] acquireMachinesLock for multinode-023000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:30:25.533524    7945 start.go:364] duration metric: took 31.333µs to acquireMachinesLock for "multinode-023000"
	I0719 07:30:25.533533    7945 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:30:25.533541    7945 fix.go:54] fixHost starting: 
	I0719 07:30:25.533667    7945 fix.go:112] recreateIfNeeded on multinode-023000: state=Stopped err=<nil>
	W0719 07:30:25.533678    7945 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:30:25.541797    7945 out.go:177] * Restarting existing qemu2 VM for "multinode-023000" ...
	I0719 07:30:25.545941    7945 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:30:25.545981    7945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:75:af:37:06:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:30:25.548342    7945 main.go:141] libmachine: STDOUT: 
	I0719 07:30:25.548363    7945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:30:25.548392    7945 fix.go:56] duration metric: took 14.85075ms for fixHost
	I0719 07:30:25.548398    7945 start.go:83] releasing machines lock for "multinode-023000", held for 14.869375ms
	W0719 07:30:25.548411    7945 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:30:25.548444    7945 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:30:25.548449    7945 start.go:729] Will try again in 5 seconds ...
	I0719 07:30:30.550570    7945 start.go:360] acquireMachinesLock for multinode-023000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:30:30.550975    7945 start.go:364] duration metric: took 332.791µs to acquireMachinesLock for "multinode-023000"
	I0719 07:30:30.551099    7945 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:30:30.551116    7945 fix.go:54] fixHost starting: 
	I0719 07:30:30.551839    7945 fix.go:112] recreateIfNeeded on multinode-023000: state=Stopped err=<nil>
	W0719 07:30:30.551863    7945 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:30:30.556322    7945 out.go:177] * Restarting existing qemu2 VM for "multinode-023000" ...
	I0719 07:30:30.560086    7945 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:30:30.560291    7945 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:75:af:37:06:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:30:30.567863    7945 main.go:141] libmachine: STDOUT: 
	I0719 07:30:30.567926    7945 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:30:30.568021    7945 fix.go:56] duration metric: took 16.905417ms for fixHost
	I0719 07:30:30.568039    7945 start.go:83] releasing machines lock for "multinode-023000", held for 17.046459ms
	W0719 07:30:30.568224    7945 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:30:30.576078    7945 out.go:177] 
	W0719 07:30:30.580259    7945 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:30:30.580292    7945 out.go:239] * 
	* 
	W0719 07:30:30.581788    7945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:30:30.592246    7945 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-023000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-023000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (32.589375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.42s)
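Diagnosis note: every "Restarting existing qemu2 VM" attempt in this suite fails at the same step. libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must connect to the unix socket /var/run/socket_vmnet before it can exec qemu-system-aarch64, and that connect() is refused, so no VM ever boots. A minimal pre-flight check on the CI host might look like the sketch below (hypothetical diagnostic commands; the service name assumes a Homebrew-installed socket_vmnet, and only the socket and client paths are taken from the log above):

    # Verify the socket exists and the daemon is running
    ls -l /var/run/socket_vmnet      # should show a socket file ('s' in the mode bits)
    pgrep -fl socket_vmnet           # should list the socket_vmnet daemon process

    # If the daemon is down, restart it; socket_vmnet must run as root
    # to use vmnet.framework
    sudo brew services restart socket_vmnet

With the daemon healthy, the wrapped QEMU command logged above should start; without it, every start/restart exits with GUEST_PROVISION exactly as recorded here.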

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 node delete m03: exit status 83 (45.840333ms)

-- stdout --
	* The control-plane node multinode-023000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-023000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-023000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr: exit status 7 (30.41825ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:30:30.778412    7960 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:30.778567    7960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:30.778570    7960 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:30.778573    7960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:30.778704    7960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:30.778838    7960 out.go:298] Setting JSON to false
	I0719 07:30:30.778848    7960 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:30:30.778905    7960 notify.go:220] Checking for updates...
	I0719 07:30:30.779038    7960 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:30.779044    7960 status.go:255] checking status of multinode-023000 ...
	I0719 07:30:30.779277    7960 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:30:30.779281    7960 status.go:343] host is not running, skipping remaining checks
	I0719 07:30:30.779283    7960 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (30.544916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (3.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-023000 stop: (3.379002333s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status: exit status 7 (68.401792ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr: exit status 7 (31.921ms)

-- stdout --
	multinode-023000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 07:30:34.288970    7985 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:34.289102    7985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:34.289109    7985 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:34.289111    7985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:34.289254    7985 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:34.289369    7985 out.go:298] Setting JSON to false
	I0719 07:30:34.289384    7985 mustload.go:65] Loading cluster: multinode-023000
	I0719 07:30:34.289447    7985 notify.go:220] Checking for updates...
	I0719 07:30:34.289561    7985 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:34.289569    7985 status.go:255] checking status of multinode-023000 ...
	I0719 07:30:34.289785    7985 status.go:330] multinode-023000 host status = "Stopped" (err=<nil>)
	I0719 07:30:34.289789    7985 status.go:343] host is not running, skipping remaining checks
	I0719 07:30:34.289792    7985 status.go:257] multinode-023000 status: &{Name:multinode-023000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr": multinode-023000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-023000 status --alsologtostderr": multinode-023000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (29.939542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.51s)

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-023000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-023000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.186640667s)

-- stdout --
	* [multinode-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-023000" primary control-plane node in "multinode-023000" cluster
	* Restarting existing qemu2 VM for "multinode-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:30:34.347310    7989 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:34.347430    7989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:34.347433    7989 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:34.347436    7989 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:34.347567    7989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:34.348659    7989 out.go:298] Setting JSON to false
	I0719 07:30:34.365329    7989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5403,"bootTime":1721394031,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:30:34.365403    7989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:30:34.370555    7989 out.go:177] * [multinode-023000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:30:34.378490    7989 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:30:34.378528    7989 notify.go:220] Checking for updates...
	I0719 07:30:34.386490    7989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:30:34.389434    7989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:30:34.392491    7989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:30:34.395502    7989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:30:34.398438    7989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:30:34.401775    7989 config.go:182] Loaded profile config "multinode-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:34.402037    7989 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:30:34.406513    7989 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:30:34.413459    7989 start.go:297] selected driver: qemu2
	I0719 07:30:34.413465    7989 start.go:901] validating driver "qemu2" against &{Name:multinode-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:30:34.413519    7989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:30:34.416006    7989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:30:34.416027    7989 cni.go:84] Creating CNI manager for ""
	I0719 07:30:34.416033    7989 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 07:30:34.416082    7989 start.go:340] cluster config:
	{Name:multinode-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:30:34.419808    7989 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:34.425480    7989 out.go:177] * Starting "multinode-023000" primary control-plane node in "multinode-023000" cluster
	I0719 07:30:34.429528    7989 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:30:34.429545    7989 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:30:34.429561    7989 cache.go:56] Caching tarball of preloaded images
	I0719 07:30:34.429624    7989 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:30:34.429630    7989 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:30:34.429692    7989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/multinode-023000/config.json ...
	I0719 07:30:34.429984    7989 start.go:360] acquireMachinesLock for multinode-023000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:30:34.430020    7989 start.go:364] duration metric: took 30.084µs to acquireMachinesLock for "multinode-023000"
	I0719 07:30:34.430028    7989 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:30:34.430036    7989 fix.go:54] fixHost starting: 
	I0719 07:30:34.430158    7989 fix.go:112] recreateIfNeeded on multinode-023000: state=Stopped err=<nil>
	W0719 07:30:34.430168    7989 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:30:34.440565    7989 out.go:177] * Restarting existing qemu2 VM for "multinode-023000" ...
	I0719 07:30:34.444496    7989 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:30:34.444539    7989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:75:af:37:06:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:30:34.446930    7989 main.go:141] libmachine: STDOUT: 
	I0719 07:30:34.446956    7989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:30:34.446987    7989 fix.go:56] duration metric: took 16.951333ms for fixHost
	I0719 07:30:34.446993    7989 start.go:83] releasing machines lock for "multinode-023000", held for 16.968416ms
	W0719 07:30:34.446999    7989 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:30:34.447033    7989 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:30:34.447037    7989 start.go:729] Will try again in 5 seconds ...
	I0719 07:30:39.449253    7989 start.go:360] acquireMachinesLock for multinode-023000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:30:39.449650    7989 start.go:364] duration metric: took 298.125µs to acquireMachinesLock for "multinode-023000"
	I0719 07:30:39.449750    7989 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:30:39.449769    7989 fix.go:54] fixHost starting: 
	I0719 07:30:39.450466    7989 fix.go:112] recreateIfNeeded on multinode-023000: state=Stopped err=<nil>
	W0719 07:30:39.450494    7989 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:30:39.455097    7989 out.go:177] * Restarting existing qemu2 VM for "multinode-023000" ...
	I0719 07:30:39.464052    7989 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:30:39.464293    7989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:75:af:37:06:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/multinode-023000/disk.qcow2
	I0719 07:30:39.473304    7989 main.go:141] libmachine: STDOUT: 
	I0719 07:30:39.473384    7989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:30:39.473472    7989 fix.go:56] duration metric: took 23.699667ms for fixHost
	I0719 07:30:39.473498    7989 start.go:83] releasing machines lock for "multinode-023000", held for 23.824459ms
	W0719 07:30:39.473693    7989 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:30:39.481060    7989 out.go:177] 
	W0719 07:30:39.484178    7989 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:30:39.484207    7989 out.go:239] * 
	* 
	W0719 07:30:39.486904    7989 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:30:39.493875    7989 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-023000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (67.568625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
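Note that the failure is independent of cluster state: the create paths (TestMultiNode/serial/ValidateNameConflict and TestPreload below) and the restart paths both die inside socket_vmnet_client before QEMU runs. The socket connection can be exercised in isolation; the sketch below assumes socket_vmnet_client will wrap an arbitrary command ('true' is a stand-in for the qemu-system-aarch64 invocation and does not appear in the log):

    # Hypothetical isolation test: wrap a no-op instead of QEMU.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # On this host it is expected to fail with:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused

If this command succeeds while minikube still fails, the problem lies in the QEMU invocation; here it refuses the connection, pointing at the daemon (or its socket permissions) rather than at any single test.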

TestMultiNode/serial/ValidateNameConflict (19.97s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-023000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-023000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-023000-m01 --driver=qemu2 : exit status 80 (9.759493792s)

-- stdout --
	* [multinode-023000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-023000-m01" primary control-plane node in "multinode-023000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-023000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-023000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-023000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-023000-m02 --driver=qemu2 : exit status 80 (9.993499709s)

-- stdout --
	* [multinode-023000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-023000-m02" primary control-plane node in "multinode-023000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-023000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-023000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-023000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-023000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-023000: exit status 83 (80.285042ms)

-- stdout --
	* The control-plane node multinode-023000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-023000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-023000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-023000 -n multinode-023000: exit status 7 (29.923208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-023000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.97s)

TestPreload (9.93s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-161000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-161000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.792401291s)

-- stdout --
	* [test-preload-161000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-161000" primary control-plane node in "test-preload-161000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-161000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:30:59.680576    8047 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:30:59.680698    8047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:59.680701    8047 out.go:304] Setting ErrFile to fd 2...
	I0719 07:30:59.680703    8047 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:30:59.680843    8047 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:30:59.681884    8047 out.go:298] Setting JSON to false
	I0719 07:30:59.697938    8047 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5428,"bootTime":1721394031,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:30:59.698014    8047 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:30:59.702927    8047 out.go:177] * [test-preload-161000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:30:59.710172    8047 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:30:59.710214    8047 notify.go:220] Checking for updates...
	I0719 07:30:59.716015    8047 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:30:59.719029    8047 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:30:59.720461    8047 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:30:59.724060    8047 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:30:59.727024    8047 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:30:59.730420    8047 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:30:59.730471    8047 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:30:59.735007    8047 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:30:59.742106    8047 start.go:297] selected driver: qemu2
	I0719 07:30:59.742112    8047 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:30:59.742119    8047 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:30:59.744482    8047 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:30:59.748074    8047 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:30:59.751145    8047 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:30:59.751174    8047 cni.go:84] Creating CNI manager for ""
	I0719 07:30:59.751181    8047 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:30:59.751185    8047 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:30:59.751219    8047 start.go:340] cluster config:
	{Name:test-preload-161000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:30:59.755048    8047 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.763027    8047 out.go:177] * Starting "test-preload-161000" primary control-plane node in "test-preload-161000" cluster
	I0719 07:30:59.767022    8047 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0719 07:30:59.767089    8047 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/test-preload-161000/config.json ...
	I0719 07:30:59.767106    8047 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/test-preload-161000/config.json: {Name:mk91bcb24112c93168c1e335dcf1b90b5782be6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:30:59.767105    8047 cache.go:107] acquiring lock: {Name:mk92593876cf6800835c6d9e9859b03602ce730b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767107    8047 cache.go:107] acquiring lock: {Name:mke6f71e88dd98217d256951d528a059ec9c3f0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767127    8047 cache.go:107] acquiring lock: {Name:mk42978ad94fbd63fa5c266f72536c28f26c0296 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767288    8047 cache.go:107] acquiring lock: {Name:mk0107a26d4780958949db91a10a754772abd433 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767335    8047 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 07:30:59.767335    8047 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 07:30:59.767103    8047 cache.go:107] acquiring lock: {Name:mk3ef163f3c672482ee9e0c33c95335c964e79c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767369    8047 cache.go:107] acquiring lock: {Name:mke4c1da3bc700708181247786ec1e5242962112 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767449    8047 cache.go:107] acquiring lock: {Name:mkd957e9134d71118423bdd21dec72cddd73069b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767510    8047 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 07:30:59.767543    8047 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:30:59.767569    8047 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 07:30:59.767603    8047 start.go:360] acquireMachinesLock for test-preload-161000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:30:59.767611    8047 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 07:30:59.767358    8047 cache.go:107] acquiring lock: {Name:mk44bd3ba52d9d3a92075113f91dcd43ea97f106 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:30:59.767631    8047 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:30:59.767643    8047 start.go:364] duration metric: took 32.792µs to acquireMachinesLock for "test-preload-161000"
	I0719 07:30:59.767655    8047 start.go:93] Provisioning new machine with config: &{Name:test-preload-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:30:59.767689    8047 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:30:59.768167    8047 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:30:59.774998    8047 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:30:59.780478    8047 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:30:59.780994    8047 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 07:30:59.781099    8047 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 07:30:59.781163    8047 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 07:30:59.781230    8047 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 07:30:59.781268    8047 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:30:59.783131    8047 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 07:30:59.783203    8047 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:30:59.792798    8047 start.go:159] libmachine.API.Create for "test-preload-161000" (driver="qemu2")
	I0719 07:30:59.792819    8047 client.go:168] LocalClient.Create starting
	I0719 07:30:59.792883    8047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:30:59.792913    8047 main.go:141] libmachine: Decoding PEM data...
	I0719 07:30:59.792922    8047 main.go:141] libmachine: Parsing certificate...
	I0719 07:30:59.792964    8047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:30:59.792993    8047 main.go:141] libmachine: Decoding PEM data...
	I0719 07:30:59.793012    8047 main.go:141] libmachine: Parsing certificate...
	I0719 07:30:59.793389    8047 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:30:59.922476    8047 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:00.056976    8047 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:00.057004    8047 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:00.057227    8047 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2
	I0719 07:31:00.066763    8047 main.go:141] libmachine: STDOUT: 
	I0719 07:31:00.066788    8047 main.go:141] libmachine: STDERR: 
	I0719 07:31:00.066868    8047 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2 +20000M
	I0719 07:31:00.076117    8047 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:00.076137    8047 main.go:141] libmachine: STDERR: 
	I0719 07:31:00.076146    8047 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2
	I0719 07:31:00.076150    8047 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:00.076161    8047 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:00.076190    8047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:b6:b4:71:c9:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2
	I0719 07:31:00.077924    8047 main.go:141] libmachine: STDOUT: 
	I0719 07:31:00.077938    8047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:00.077957    8047 client.go:171] duration metric: took 285.136208ms to LocalClient.Create
	I0719 07:31:00.307723    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 07:31:00.311024    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0719 07:31:00.317699    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0719 07:31:00.319364    8047 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 07:31:00.319397    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 07:31:00.321283    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0719 07:31:00.325385    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0719 07:31:00.328854    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0719 07:31:00.432113    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0719 07:31:00.432179    8047 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 664.909958ms
	I0719 07:31:00.432210    8047 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0719 07:31:00.547217    8047 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 07:31:00.547304    8047 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 07:31:00.804215    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 07:31:00.804269    8047 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.037167792s
	I0719 07:31:00.804295    8047 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 07:31:02.078343    8047 start.go:128] duration metric: took 2.310638417s to createHost
	I0719 07:31:02.078402    8047 start.go:83] releasing machines lock for "test-preload-161000", held for 2.310764959s
	W0719 07:31:02.078473    8047 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:02.089500    8047 out.go:177] * Deleting "test-preload-161000" in qemu2 ...
	W0719 07:31:02.110227    8047 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:02.110256    8047 start.go:729] Will try again in 5 seconds ...
	I0719 07:31:02.539138    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0719 07:31:02.539185    8047 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.77192275s
	I0719 07:31:02.539213    8047 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0719 07:31:03.240846    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0719 07:31:03.240900    8047 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.473604625s
	I0719 07:31:03.240930    8047 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0719 07:31:03.816021    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0719 07:31:03.816078    8047 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.048969583s
	I0719 07:31:03.816105    8047 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0719 07:31:04.401454    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0719 07:31:04.401509    8047 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.634438209s
	I0719 07:31:04.401542    8047 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0719 07:31:05.777686    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0719 07:31:05.777735    8047 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.01066975s
	I0719 07:31:05.777786    8047 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0719 07:31:07.110387    8047 start.go:360] acquireMachinesLock for test-preload-161000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:31:07.110830    8047 start.go:364] duration metric: took 368.875µs to acquireMachinesLock for "test-preload-161000"
	I0719 07:31:07.110946    8047 start.go:93] Provisioning new machine with config: &{Name:test-preload-161000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-161000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:31:07.111197    8047 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:31:07.119703    8047 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:31:07.169582    8047 start.go:159] libmachine.API.Create for "test-preload-161000" (driver="qemu2")
	I0719 07:31:07.169712    8047 client.go:168] LocalClient.Create starting
	I0719 07:31:07.169858    8047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:31:07.169927    8047 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:07.169950    8047 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:07.170018    8047 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:31:07.170063    8047 main.go:141] libmachine: Decoding PEM data...
	I0719 07:31:07.170079    8047 main.go:141] libmachine: Parsing certificate...
	I0719 07:31:07.170598    8047 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:31:07.298510    8047 main.go:141] libmachine: Creating SSH key...
	I0719 07:31:07.380631    8047 main.go:141] libmachine: Creating Disk image...
	I0719 07:31:07.380636    8047 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:31:07.380828    8047 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2
	I0719 07:31:07.390266    8047 main.go:141] libmachine: STDOUT: 
	I0719 07:31:07.390285    8047 main.go:141] libmachine: STDERR: 
	I0719 07:31:07.390339    8047 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2 +20000M
	I0719 07:31:07.398554    8047 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:31:07.398572    8047 main.go:141] libmachine: STDERR: 
	I0719 07:31:07.398581    8047 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2
	I0719 07:31:07.398591    8047 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:31:07.398597    8047 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:31:07.398635    8047 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:06:bd:3f:0a:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/test-preload-161000/disk.qcow2
	I0719 07:31:07.400453    8047 main.go:141] libmachine: STDOUT: 
	I0719 07:31:07.400470    8047 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:31:07.400483    8047 client.go:171] duration metric: took 230.766083ms to LocalClient.Create
	I0719 07:31:08.827461    8047 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0719 07:31:08.827528    8047 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.060310334s
	I0719 07:31:08.827570    8047 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0719 07:31:08.827633    8047 cache.go:87] Successfully saved all images to host disk.
	I0719 07:31:09.402638    8047 start.go:128] duration metric: took 2.291404833s to createHost
	I0719 07:31:09.402726    8047 start.go:83] releasing machines lock for "test-preload-161000", held for 2.291887541s
	W0719 07:31:09.402997    8047 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-161000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-161000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:31:09.414619    8047 out.go:177] 
	W0719 07:31:09.417598    8047 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:31:09.417625    8047 out.go:239] * 
	* 
	W0719 07:31:09.420179    8047 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:31:09.429507    8047 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-161000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-19 07:31:09.44829 -0700 PDT m=+698.723816793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-161000 -n test-preload-161000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-161000 -n test-preload-161000: exit status 7 (66.239791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-161000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-161000
--- FAIL: TestPreload (9.93s)
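Note on the failure mode: every qemu2 VM creation above aborts at the same step, socket_vmnet_client returning `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which points at the socket_vmnet daemon on the build host being down rather than at the test itself. A minimal host-side sanity check, sketched under the assumption that socket_vmnet was installed via Homebrew as described in the minikube qemu2 driver docs (the service name and paths are assumptions for any other install):

    ls -l /var/run/socket_vmnet                         # the Unix socket should exist while the daemon runs
    sudo launchctl list | grep -i socket_vmnet          # is the launchd service loaded at all?
    sudo "$(which brew)" services restart socket_vmnet  # (re)start it, per the minikube qemu2 driver docs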

TestScheduledStopUnix (9.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-601000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-601000 --memory=2048 --driver=qemu2 : exit status 80 (9.787369875s)

-- stdout --
	* [scheduled-stop-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-601000" primary control-plane node in "scheduled-stop-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-601000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-601000" primary control-plane node in "scheduled-stop-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-19 07:31:19.372736 -0700 PDT m=+708.648329876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-601000 -n scheduled-stop-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-601000 -n scheduled-stop-601000: exit status 7 (67.839208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-601000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-601000
--- FAIL: TestScheduledStopUnix (9.93s)
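Note on the post-mortem: the `status` call exits with code 7 and the harness accepts it ("may be ok"). The `minikube status --help` text describes the exit code as a bitmask built from right to left (1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK), so 7 means all three components are down, consistent with the "Stopped" host above. A small bash sketch of that decoding:

    status=7   # exit code reported by the status call above
    (( status & 1 )) && echo "host not OK"
    (( status & 2 )) && echo "cluster not OK"
    (( status & 4 )) && echo "kubernetes not OK"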

TestSkaffold (12.47s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe508053187 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe508053187 version: (1.06505775s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-562000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-562000 --memory=2600 --driver=qemu2 : exit status 80 (10.157458333s)

-- stdout --
	* [skaffold-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-562000" primary control-plane node in "skaffold-562000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-562000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-562000" primary control-plane node in "skaffold-562000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-562000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-562000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-19 07:31:31.852481 -0700 PDT m=+721.128158751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-562000 -n skaffold-562000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-562000 -n skaffold-562000: exit status 7 (60.036833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-562000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-562000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-562000
--- FAIL: TestSkaffold (12.47s)

TestRunningBinaryUpgrade (585.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3394300815 start -p running-upgrade-059000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.3394300815 start -p running-upgrade-059000 --memory=2200 --vm-driver=qemu2 : (50.35232225s)
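Note: the legacy v1.26.0 binary starts cleanly here where every fresh start above failed, because the profile it writes (dumped further down in this log) leaves Network, SocketVMnetClientPath and SocketVMnetPath empty, i.e. it relies on QEMU's built-in user-mode networking instead of the unreachable socket_vmnet daemon. A hypothetical way to force the same behavior on the new binary, assuming the qemu2 driver's --network flag (user-mode networking trades away direct node IP reachability):

    out/minikube-darwin-arm64 start -p running-upgrade-059000 --memory=2200 --driver=qemu2 --network=user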
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-059000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-059000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.239333208s)

-- stdout --
	* [running-upgrade-059000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-059000" primary control-plane node in "running-upgrade-059000" cluster
	* Updating the running qemu2 "running-upgrade-059000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0719 07:33:03.445544    8434 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:33:03.445724    8434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:33:03.445728    8434 out.go:304] Setting ErrFile to fd 2...
	I0719 07:33:03.445730    8434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:33:03.445864    8434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:33:03.447230    8434 out.go:298] Setting JSON to false
	I0719 07:33:03.463865    8434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5552,"bootTime":1721394031,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:33:03.463938    8434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:33:03.468311    8434 out.go:177] * [running-upgrade-059000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:33:03.476278    8434 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:33:03.476329    8434 notify.go:220] Checking for updates...
	I0719 07:33:03.484139    8434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:33:03.488224    8434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:33:03.491202    8434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:33:03.500169    8434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:33:03.508252    8434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:33:03.511469    8434 config.go:182] Loaded profile config "running-upgrade-059000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:33:03.514151    8434 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 07:33:03.518222    8434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:33:03.522175    8434 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:33:03.530210    8434 start.go:297] selected driver: qemu2
	I0719 07:33:03.530218    8434 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-059000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51189 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-059000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:33:03.530262    8434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:33:03.532536    8434 cni.go:84] Creating CNI manager for ""
	I0719 07:33:03.532553    8434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:33:03.532572    8434 start.go:340] cluster config:
	{Name:running-upgrade-059000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51189 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-059000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:33:03.532624    8434 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:33:03.541212    8434 out.go:177] * Starting "running-upgrade-059000" primary control-plane node in "running-upgrade-059000" cluster
	I0719 07:33:03.545210    8434 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 07:33:03.545227    8434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0719 07:33:03.545239    8434 cache.go:56] Caching tarball of preloaded images
	I0719 07:33:03.545293    8434 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:33:03.545299    8434 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0719 07:33:03.545366    8434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/config.json ...
	I0719 07:33:03.545768    8434 start.go:360] acquireMachinesLock for running-upgrade-059000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:33:03.545797    8434 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "running-upgrade-059000"
	I0719 07:33:03.545805    8434 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:33:03.545811    8434 fix.go:54] fixHost starting: 
	I0719 07:33:03.546451    8434 fix.go:112] recreateIfNeeded on running-upgrade-059000: state=Running err=<nil>
	W0719 07:33:03.546459    8434 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:33:03.550232    8434 out.go:177] * Updating the running qemu2 "running-upgrade-059000" VM ...
	I0719 07:33:03.558146    8434 machine.go:94] provisionDockerMachine start ...
	I0719 07:33:03.558179    8434 main.go:141] libmachine: Using SSH client type: native
	I0719 07:33:03.558301    8434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051c2a10] 0x1051c5270 <nil>  [] 0s} localhost 51157 <nil> <nil>}
	I0719 07:33:03.558306    8434 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 07:33:03.616728    8434 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-059000
	
	I0719 07:33:03.616741    8434 buildroot.go:166] provisioning hostname "running-upgrade-059000"
	I0719 07:33:03.616786    8434 main.go:141] libmachine: Using SSH client type: native
	I0719 07:33:03.616900    8434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051c2a10] 0x1051c5270 <nil>  [] 0s} localhost 51157 <nil> <nil>}
	I0719 07:33:03.616909    8434 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-059000 && echo "running-upgrade-059000" | sudo tee /etc/hostname
	I0719 07:33:03.678359    8434 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-059000
	
	I0719 07:33:03.678417    8434 main.go:141] libmachine: Using SSH client type: native
	I0719 07:33:03.678540    8434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051c2a10] 0x1051c5270 <nil>  [] 0s} localhost 51157 <nil> <nil>}
	I0719 07:33:03.678552    8434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-059000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-059000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-059000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 07:33:03.734970    8434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 07:33:03.734983    8434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-5980/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-5980/.minikube}
	I0719 07:33:03.734995    8434 buildroot.go:174] setting up certificates
	I0719 07:33:03.734999    8434 provision.go:84] configureAuth start
	I0719 07:33:03.735003    8434 provision.go:143] copyHostCerts
	I0719 07:33:03.735072    8434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem, removing ...
	I0719 07:33:03.735090    8434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem
	I0719 07:33:03.735214    8434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem (1078 bytes)
	I0719 07:33:03.735392    8434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem, removing ...
	I0719 07:33:03.735395    8434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem
	I0719 07:33:03.735442    8434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem (1123 bytes)
	I0719 07:33:03.735540    8434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem, removing ...
	I0719 07:33:03.735543    8434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem
	I0719 07:33:03.735582    8434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem (1679 bytes)
	I0719 07:33:03.735663    8434 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-059000 san=[127.0.0.1 localhost minikube running-upgrade-059000]
	I0719 07:33:03.851510    8434 provision.go:177] copyRemoteCerts
	I0719 07:33:03.851556    8434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 07:33:03.851564    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
	I0719 07:33:03.883643    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 07:33:03.890479    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 07:33:03.897565    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 07:33:03.904568    8434 provision.go:87] duration metric: took 169.56525ms to configureAuth
	I0719 07:33:03.904577    8434 buildroot.go:189] setting minikube options for container-runtime
	I0719 07:33:03.904693    8434 config.go:182] Loaded profile config "running-upgrade-059000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:33:03.904739    8434 main.go:141] libmachine: Using SSH client type: native
	I0719 07:33:03.904828    8434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051c2a10] 0x1051c5270 <nil>  [] 0s} localhost 51157 <nil> <nil>}
	I0719 07:33:03.904832    8434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 07:33:03.963538    8434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 07:33:03.963549    8434 buildroot.go:70] root file system type: tmpfs
	I0719 07:33:03.963599    8434 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 07:33:03.963650    8434 main.go:141] libmachine: Using SSH client type: native
	I0719 07:33:03.963765    8434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051c2a10] 0x1051c5270 <nil>  [] 0s} localhost 51157 <nil> <nil>}
	I0719 07:33:03.963797    8434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 07:33:04.026999    8434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 07:33:04.027051    8434 main.go:141] libmachine: Using SSH client type: native
	I0719 07:33:04.027172    8434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051c2a10] 0x1051c5270 <nil>  [] 0s} localhost 51157 <nil> <nil>}
	I0719 07:33:04.027180    8434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
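The empty ExecStart= followed by a full one is the standard systemd idiom for replacing, rather than appending to, an inherited start command, exactly as the comments embedded in the unit explain. A minimal drop-in using the same trick (hypothetical path and daemon flags):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
	[Service]
	# the empty line clears the inherited command; without it systemd rejects
	# the unit ("more than one ExecStart= setting")
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker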
	I0719 07:33:04.089103    8434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 07:33:04.089116    8434 machine.go:97] duration metric: took 530.96825ms to provisionDockerMachine
	I0719 07:33:04.089121    8434 start.go:293] postStartSetup for "running-upgrade-059000" (driver="qemu2")
	I0719 07:33:04.089127    8434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 07:33:04.089180    8434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 07:33:04.089187    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
	I0719 07:33:04.121011    8434 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 07:33:04.122638    8434 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 07:33:04.122645    8434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-5980/.minikube/addons for local assets ...
	I0719 07:33:04.122718    8434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-5980/.minikube/files for local assets ...
	I0719 07:33:04.122812    8434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem -> 64732.pem in /etc/ssl/certs
	I0719 07:33:04.122912    8434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 07:33:04.125586    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem --> /etc/ssl/certs/64732.pem (1708 bytes)
	I0719 07:33:04.133441    8434 start.go:296] duration metric: took 44.314125ms for postStartSetup
	I0719 07:33:04.133457    8434 fix.go:56] duration metric: took 587.651042ms for fixHost
	I0719 07:33:04.133501    8434 main.go:141] libmachine: Using SSH client type: native
	I0719 07:33:04.133614    8434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1051c2a10] 0x1051c5270 <nil>  [] 0s} localhost 51157 <nil> <nil>}
	I0719 07:33:04.133618    8434 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 07:33:04.189044    8434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721399583.851600555
	
	I0719 07:33:04.189055    8434 fix.go:216] guest clock: 1721399583.851600555
	I0719 07:33:04.189059    8434 fix.go:229] Guest: 2024-07-19 07:33:03.851600555 -0700 PDT Remote: 2024-07-19 07:33:04.133459 -0700 PDT m=+0.708165251 (delta=-281.858445ms)
	I0719 07:33:04.189072    8434 fix.go:200] guest clock delta is within tolerance: -281.858445ms
	I0719 07:33:04.189075    8434 start.go:83] releasing machines lock for "running-upgrade-059000", held for 643.278042ms
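The tolerance check boils down to comparing date +%s.%N inside the guest against the host clock. A rough shell equivalent, assuming GNU date on both ends and the SSH port/key from the log (paths abbreviated):

	host_now=$(date +%s.%N)
	guest_now=$(ssh -p 51157 -i .minikube/machines/running-upgrade-059000/id_rsa docker@localhost 'date +%s.%N')
	# a delta of a few hundred milliseconds, as above, is within tolerance
	echo "delta: $(echo "$guest_now - $host_now" | bc)s"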
	I0719 07:33:04.189135    8434 ssh_runner.go:195] Run: cat /version.json
	I0719 07:33:04.189144    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
	I0719 07:33:04.189153    8434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 07:33:04.189169    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
	W0719 07:33:04.189745    8434 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:51264->127.0.0.1:51157: read: connection reset by peer
	I0719 07:33:04.189766    8434 retry.go:31] will retry after 373.693054ms: ssh: handshake failed: read tcp 127.0.0.1:51264->127.0.0.1:51157: read: connection reset by peer
	W0719 07:33:04.219573    8434 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0719 07:33:04.219629    8434 ssh_runner.go:195] Run: systemctl --version
	I0719 07:33:04.221659    8434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 07:33:04.223474    8434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 07:33:04.223497    8434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 07:33:04.226695    8434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 07:33:04.231562    8434 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
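Those sed passes pin whatever bridge/podman CNI files exist to the cluster pod CIDR. A minimal conflist of the kind being rewritten might look like this; a hedged sketch, not the exact file shipped in the guest image:

	cat <<'EOF' | sudo tee /etc/cni/net.d/87-podman-bridge.conflist
	{
	  "cniVersion": "0.4.0",
	  "name": "podman",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni-podman0",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{ "subnet": "10.244.0.0/16", "gateway": "10.244.0.1" }]]
	      }
	    }
	  ]
	}
	EOF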
	I0719 07:33:04.231570    8434 start.go:495] detecting cgroup driver to use...
	I0719 07:33:04.231677    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 07:33:04.237042    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0719 07:33:04.240064    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 07:33:04.242995    8434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 07:33:04.243022    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 07:33:04.246549    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 07:33:04.249896    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 07:33:04.253260    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 07:33:04.256322    8434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 07:33:04.259202    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 07:33:04.262506    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 07:33:04.265817    8434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 07:33:04.268979    8434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 07:33:04.271734    8434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 07:33:04.274654    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:33:04.354222    8434 ssh_runner.go:195] Run: sudo systemctl restart containerd
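After that series of sed edits, /etc/containerd/config.toml should carry cgroupfs-friendly settings; a quick spot check (expected values reconstructed from the commands above, not dumped from the actual file):

	sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected, roughly:
	#   SystemdCgroup = false
	#   sandbox_image = "registry.k8s.io/pause:3.7"
	#   restrict_oom_score_adj = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true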
	I0719 07:33:04.365106    8434 start.go:495] detecting cgroup driver to use...
	I0719 07:33:04.365177    8434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 07:33:04.369977    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 07:33:04.376355    8434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 07:33:04.384602    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 07:33:04.389853    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 07:33:04.395133    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 07:33:04.400847    8434 ssh_runner.go:195] Run: which cri-dockerd
	I0719 07:33:04.402186    8434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 07:33:04.405014    8434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 07:33:04.409762    8434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 07:33:04.496862    8434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 07:33:04.574458    8434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 07:33:04.574509    8434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 07:33:04.581743    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:33:04.661244    8434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 07:33:06.372615    8434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.711366042s)
	I0719 07:33:06.372667    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 07:33:06.377111    8434 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 07:33:06.382801    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 07:33:06.387662    8434 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 07:33:06.469495    8434 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 07:33:06.531490    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:33:06.588103    8434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 07:33:06.593641    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 07:33:06.597959    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:33:06.660653    8434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 07:33:06.698836    8434 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 07:33:06.698909    8434 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 07:33:06.701435    8434 start.go:563] Will wait 60s for crictl version
	I0719 07:33:06.701484    8434 ssh_runner.go:195] Run: which crictl
	I0719 07:33:06.702929    8434 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 07:33:06.716139    8434 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
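With /etc/crictl.yaml pointing at the cri-dockerd socket, the same version probe works by hand:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	# expect: RuntimeName docker, RuntimeVersion 20.10.16, RuntimeApiVersion 1.41.0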
	I0719 07:33:06.716206    8434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 07:33:06.728372    8434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 07:33:06.747778    8434 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0719 07:33:06.747848    8434 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0719 07:33:06.749178    8434 kubeadm.go:883] updating cluster {Name:running-upgrade-059000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51189 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-059000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0719 07:33:06.749223    8434 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 07:33:06.749257    8434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 07:33:06.760832    8434 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 07:33:06.760840    8434 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 07:33:06.760885    8434 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 07:33:06.763881    8434 ssh_runner.go:195] Run: which lz4
	I0719 07:33:06.765147    8434 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 07:33:06.766422    8434 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 07:33:06.766433    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0719 07:33:07.659831    8434 docker.go:649] duration metric: took 894.725333ms to copy over tarball
	I0719 07:33:07.659889    8434 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 07:33:08.950274    8434 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.290382667s)
	I0719 07:33:08.950287    8434 ssh_runner.go:146] rm: /preloaded.tar.lz4
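The stat-then-scp pattern above is a plain existence check: copy the preload tarball only if the guest lacks it, then unpack it over /var with capabilities preserved. A hedged shell equivalent (paths abbreviated, and ignoring the sudo plumbing ssh_runner adds around the copy):

	if ! ssh -p 51157 docker@localhost 'stat -c "%s %y" /preloaded.tar.lz4' 2>/dev/null; then
	  scp -P 51157 .minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 \
	    docker@localhost:/preloaded.tar.lz4
	fi
	# --xattrs-include keeps file capabilities on the unpacked binaries intact
	ssh -p 51157 docker@localhost \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'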
	I0719 07:33:08.966394    8434 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 07:33:08.969835    8434 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0719 07:33:08.974562    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:33:09.039500    8434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 07:33:10.255487    8434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.215977333s)
	I0719 07:33:10.255592    8434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 07:33:10.270676    8434 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 07:33:10.270685    8434 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 07:33:10.270689    8434 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 07:33:10.274649    8434 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:33:10.276246    8434 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:33:10.278736    8434 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:33:10.278797    8434 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:33:10.281099    8434 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:33:10.281030    8434 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:33:10.282759    8434 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:33:10.282826    8434 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:33:10.284381    8434 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:33:10.284419    8434 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:33:10.285384    8434 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:33:10.285975    8434 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:33:10.286433    8434 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 07:33:10.287095    8434 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:33:10.288115    8434 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 07:33:10.288133    8434 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:33:10.691947    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:33:10.702260    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:33:10.707975    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:33:10.713300    8434 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0719 07:33:10.713331    8434 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:33:10.713386    8434 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:33:10.718003    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:33:10.723219    8434 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0719 07:33:10.723242    8434 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:33:10.723290    8434 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:33:10.727164    8434 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0719 07:33:10.727188    8434 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:33:10.727237    8434 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:33:10.734139    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0719 07:33:10.737712    8434 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 07:33:10.737843    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:33:10.747196    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0719 07:33:10.747278    8434 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0719 07:33:10.747294    8434 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:33:10.747345    8434 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:33:10.753753    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0719 07:33:10.755587    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 07:33:10.756611    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0719 07:33:10.772536    8434 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0719 07:33:10.772555    8434 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0719 07:33:10.772609    8434 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0719 07:33:10.772613    8434 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0719 07:33:10.772623    8434 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:33:10.772646    8434 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:33:10.776255    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0719 07:33:10.777896    8434 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0719 07:33:10.777912    8434 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:33:10.777959    8434 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0719 07:33:10.791997    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 07:33:10.792016    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 07:33:10.792122    8434 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0719 07:33:10.792122    8434 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0719 07:33:10.795018    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 07:33:10.795107    8434 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 07:33:10.796157    8434 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0719 07:33:10.796168    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0719 07:33:10.796401    8434 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0719 07:33:10.796409    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0719 07:33:10.796673    8434 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0719 07:33:10.796682    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0719 07:33:10.816035    8434 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 07:33:10.816056    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0719 07:33:10.869210    8434 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 07:33:10.869320    8434 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:33:10.884889    8434 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0719 07:33:10.908425    8434 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 07:33:10.908442    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0719 07:33:10.916702    8434 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0719 07:33:10.916724    8434 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:33:10.916785    8434 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:33:11.043427    8434 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 07:33:11.130780    8434 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 07:33:11.130807    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0719 07:33:11.482207    8434 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 07:33:11.482254    8434 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 07:33:11.482400    8434 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 07:33:11.484822    8434 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0719 07:33:11.484839    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0719 07:33:11.524669    8434 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 07:33:11.524683    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0719 07:33:11.757912    8434 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 07:33:11.757949    8434 cache_images.go:92] duration metric: took 1.487264167s to LoadCachedImages
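The two "arch mismatch: want arm64 got amd64" warnings above come from comparing the architecture recorded in the image config against the host; the field is easy to inspect directly:

	docker image inspect --format '{{.Architecture}}' registry.k8s.io/coredns/coredns:v1.8.6
	# arm64 for the right variant; amd64 is what the warning flags before re-fetching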
	W0719 07:33:11.757994    8434 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0719 07:33:11.758001    8434 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0719 07:33:11.758060    8434 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-059000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-059000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
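These kubelet flags land in /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in written a few lines below; the merged unit can be reviewed the same way the log inspects docker's:

	# prints the base unit followed by every drop-in, in load order
	sudo systemctl cat kubelet.service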
	I0719 07:33:11.758134    8434 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 07:33:11.771787    8434 cni.go:84] Creating CNI manager for ""
	I0719 07:33:11.771800    8434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:33:11.771805    8434 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 07:33:11.771814    8434 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-059000 NodeName:running-upgrade-059000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 07:33:11.771884    8434 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-059000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 07:33:11.771934    8434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0719 07:33:11.775152    8434 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 07:33:11.775183    8434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 07:33:11.778285    8434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0719 07:33:11.783503    8434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 07:33:11.788626    8434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
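The rendered config is staged as kubeadm.yaml.new and later diffed against the live copy (see the drift check below). When a fresh control plane is needed rather than a restart, it is handed to the cached kubeadm binary along these lines (hedged sketch; the real invocation carries a longer preflight ignore list):

	sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests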
	I0719 07:33:11.793591    8434 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0719 07:33:11.795104    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:33:11.861921    8434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 07:33:11.867589    8434 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000 for IP: 10.0.2.15
	I0719 07:33:11.867596    8434 certs.go:194] generating shared ca certs ...
	I0719 07:33:11.867604    8434 certs.go:226] acquiring lock for ca certs: {Name:mk9d0c6de3978c1656d7567742ecf2a49cbc189d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:33:11.867836    8434 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.key
	I0719 07:33:11.867871    8434 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.key
	I0719 07:33:11.867875    8434 certs.go:256] generating profile certs ...
	I0719 07:33:11.867931    8434 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/client.key
	I0719 07:33:11.867943    8434 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.key.76394198
	I0719 07:33:11.867954    8434 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.crt.76394198 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0719 07:33:11.955761    8434 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.crt.76394198 ...
	I0719 07:33:11.955766    8434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.crt.76394198: {Name:mk53e1ab0aba3fe4954addd139f427221aaf27b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:33:11.955982    8434 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.key.76394198 ...
	I0719 07:33:11.955986    8434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.key.76394198: {Name:mk5228eaf79bcfac43fef6e97025b8f378dbe3b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:33:11.956108    8434 certs.go:381] copying /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.crt.76394198 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.crt
	I0719 07:33:11.956279    8434 certs.go:385] copying /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.key.76394198 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.key
	I0719 07:33:11.956401    8434 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/proxy-client.key
	I0719 07:33:11.956525    8434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473.pem (1338 bytes)
	W0719 07:33:11.956548    8434 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473_empty.pem, impossibly tiny 0 bytes
	I0719 07:33:11.956553    8434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 07:33:11.956604    8434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem (1078 bytes)
	I0719 07:33:11.956630    8434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem (1123 bytes)
	I0719 07:33:11.956649    8434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem (1679 bytes)
	I0719 07:33:11.956697    8434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem (1708 bytes)
	I0719 07:33:11.957035    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 07:33:11.964152    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 07:33:11.971287    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 07:33:11.978664    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 07:33:11.985947    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 07:33:11.993389    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 07:33:12.000801    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 07:33:12.007619    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 07:33:12.014609    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473.pem --> /usr/share/ca-certificates/6473.pem (1338 bytes)
	I0719 07:33:12.022035    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem --> /usr/share/ca-certificates/64732.pem (1708 bytes)
	I0719 07:33:12.031243    8434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 07:33:12.038511    8434 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 07:33:12.043677    8434 ssh_runner.go:195] Run: openssl version
	I0719 07:33:12.045565    8434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 07:33:12.048805    8434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:33:12.050241    8434 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:32 /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:33:12.050264    8434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:33:12.052111    8434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 07:33:12.054856    8434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6473.pem && ln -fs /usr/share/ca-certificates/6473.pem /etc/ssl/certs/6473.pem"
	I0719 07:33:12.059538    8434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6473.pem
	I0719 07:33:12.061182    8434 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:20 /usr/share/ca-certificates/6473.pem
	I0719 07:33:12.061205    8434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6473.pem
	I0719 07:33:12.063201    8434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6473.pem /etc/ssl/certs/51391683.0"
	I0719 07:33:12.065992    8434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64732.pem && ln -fs /usr/share/ca-certificates/64732.pem /etc/ssl/certs/64732.pem"
	I0719 07:33:12.069431    8434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64732.pem
	I0719 07:33:12.070920    8434 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:20 /usr/share/ca-certificates/64732.pem
	I0719 07:33:12.070940    8434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64732.pem
	I0719 07:33:12.072913    8434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64732.pem /etc/ssl/certs/3ec20f2e.0"
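The b5213941.0 / 51391683.0 / 3ec20f2e.0 link names above are OpenSSL subject hashes, which is how the TLS stack locates CAs in /etc/ssl/certs; the same link can be rebuilt by hand:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"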
	I0719 07:33:12.075522    8434 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 07:33:12.077093    8434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 07:33:12.078865    8434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 07:33:12.080753    8434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 07:33:12.082535    8434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 07:33:12.084678    8434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 07:33:12.086476    8434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
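-checkend 86400 exits 0 only if the certificate is still valid 24 hours from now, so a non-zero status in any of these runs is what would trigger regeneration:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "apiserver cert good for at least another day"
	else
	  echo "apiserver cert expires within 24h"
	fi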
	I0719 07:33:12.088506    8434 kubeadm.go:392] StartCluster: {Name:running-upgrade-059000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51189 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-059000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:33:12.088577    8434 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 07:33:12.099101    8434 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 07:33:12.102733    8434 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 07:33:12.102744    8434 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 07:33:12.102772    8434 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 07:33:12.105941    8434 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:33:12.105978    8434 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-059000" does not appear in /Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:33:12.105994    8434 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-5980/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-059000" cluster setting kubeconfig missing "running-upgrade-059000" context setting]
	I0719 07:33:12.106165    8434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/kubeconfig: {Name:mk0c17b3830610cdae4c834f6bae9631cabc7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:33:12.107110    8434 kapi.go:59] client config for running-upgrade-059000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106557790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 07:33:12.108014    8434 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 07:33:12.110821    8434 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-059000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
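
The diff above is the drift signal: the deployed kubeadm.yaml still uses the bare /var/run/cri-dockerd.sock CRI socket and the systemd cgroup driver, while the freshly generated kubeadm.yaml.new expects the unix:// URI scheme and cgroupfs, so kubeadm.go:640 opts to reconfigure the cluster from the new file. A minimal sketch of that check, assuming local (non-SSH) execution and using the paths from the log; this is illustrative, not minikube's actual code:

    // driftcheck.go — run `diff -u` between the deployed and freshly
    // generated kubeadm configs; diff's exit status 1 means "files differ",
    // which is the cue to reconfigure the cluster from the new file.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil // exit 0: identical, nothing to do
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // exit 1: drift detected
    	}
    	return false, "", err // exit >1: diff itself failed (e.g. missing file)
    }

    func main() {
    	drifted, diff, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Print("kubeadm config drift detected (will reconfigure):\n", diff)
    	}
    }
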
	I0719 07:33:12.110828    8434 kubeadm.go:1160] stopping kube-system containers ...
	I0719 07:33:12.110866    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 07:33:12.122011    8434 docker.go:483] Stopping containers: [85e6ef1cd493 a8f97f897146 a4d6590c67b0 ba0810a639e2 f8f3db7d9f58 b2c79da72382 d915d0b15229 3fe4e035dfe4 6510bef59285 a78ef0d335e6 57e184715c0f d7568fc9ea9d c8a9623eed77]
	I0719 07:33:12.122081    8434 ssh_runner.go:195] Run: docker stop 85e6ef1cd493 a8f97f897146 a4d6590c67b0 ba0810a639e2 f8f3db7d9f58 b2c79da72382 d915d0b15229 3fe4e035dfe4 6510bef59285 a78ef0d335e6 57e184715c0f d7568fc9ea9d c8a9623eed77
	I0719 07:33:13.004798    8434 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 07:33:13.122853    8434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 07:33:13.126846    8434 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Jul 19 14:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 19 14:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 19 14:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 19 14:32 /etc/kubernetes/scheduler.conf
	
	I0719 07:33:13.126882    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/admin.conf
	I0719 07:33:13.130102    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:33:13.130128    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 07:33:13.133200    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/kubelet.conf
	I0719 07:33:13.135727    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:33:13.135754    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 07:33:13.138545    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/controller-manager.conf
	I0719 07:33:13.141704    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:33:13.141731    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 07:33:13.144548    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/scheduler.conf
	I0719 07:33:13.147022    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:33:13.147041    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
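
Each of the four grep/rm rounds above follows the same pattern: grep exit status 1 means the expected control-plane endpoint is absent from that kubeconfig, so the stale file is deleted and left for `kubeadm init phase kubeconfig` to regenerate. A sketch of that loop, assuming local execution (the log runs it via sudo over SSH); the endpoint URL and file list are taken from the log:

    // endpointcheck.go — remove kubeconfigs that do not reference the
    // expected control-plane endpoint; grep -qF exits 1 on "no match".
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51189"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := exec.Command("grep", "-qF", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s lacks %s; removing\n", f, endpoint)
    			_ = os.Remove(f) // regenerated later by kubeadm init phase kubeconfig
    		}
    	}
    }
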
	I0719 07:33:13.150136    8434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 07:33:13.153181    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:33:13.189832    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:33:13.835549    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:33:14.017228    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:33:14.037126    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
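
The five commands above are the restart path's kubeadm phase sequence: certs, kubeconfig, kubelet-start, control-plane, etcd, all driven from the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing, assuming kubeadm is on PATH locally rather than invoked through minikube's ssh_runner:

    // phases.go — run the kubeadm init phases in the order the log shows,
    // aborting on the first failure (the log's runs are strictly sequential).
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	config := "/var/tmp/minikube/kubeadm.yaml"
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", config)
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("phase %v failed: %v\n%s", p, err, out))
    		}
    	}
    }
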
	I0719 07:33:14.059362    8434 api_server.go:52] waiting for apiserver process to appear ...
	I0719 07:33:14.059440    8434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:33:14.561777    8434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:33:15.061528    8434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:33:15.561484    8434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:33:15.566068    8434 api_server.go:72] duration metric: took 1.506717333s to wait for apiserver process to appear ...
	I0719 07:33:15.566077    8434 api_server.go:88] waiting for apiserver healthz status ...
	I0719 07:33:15.566086    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:20.568195    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:20.568239    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:25.568721    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:25.568839    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:30.569644    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:30.569693    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:35.570490    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:35.570574    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:40.572080    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:40.572170    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:45.573945    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:45.574030    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:50.576296    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:50.576377    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:33:55.579030    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:33:55.579112    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:00.581813    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:00.581935    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:05.584483    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:05.584542    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:10.585993    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:10.586073    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:15.588767    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
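
Every healthz probe above fails the same way: the HTTP client gives each GET a 5-second budget (hence the "Client.Timeout exceeded while awaiting headers" errors spaced 5 seconds apart), and after repeated misses the loop falls back to collecting diagnostics before retrying. A minimal sketch of that probe loop, assuming a 1-minute overall budget (the log does not show the exact budget) and skipping TLS verification since this host does not trust the apiserver's certificate:

    // healthz.go — poll the apiserver's /healthz with a short per-request
    // timeout until it answers 200 or the overall deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the Client.Timeout errors above
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(1 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		// a failed probe already consumed up to 5s; loop retries immediately
    	}
    	fmt.Println("apiserver never became healthy; gathering logs instead")
    }
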
	I0719 07:34:15.589188    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:34:15.628590    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:34:15.628735    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:34:15.650942    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:34:15.651048    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:34:15.666132    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:34:15.666201    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:34:15.679664    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:34:15.679734    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:34:15.690713    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:34:15.690790    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:34:15.701013    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:34:15.701076    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:34:15.711265    8434 logs.go:276] 0 containers: []
	W0719 07:34:15.711275    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:34:15.711331    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:34:15.722433    8434 logs.go:276] 0 containers: []
	W0719 07:34:15.722444    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:34:15.722451    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:34:15.722457    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:34:15.759880    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:34:15.759889    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:34:15.774300    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:34:15.774312    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:34:15.791349    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:34:15.791361    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:34:15.804038    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:34:15.804052    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:34:15.820890    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:34:15.820903    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:34:15.825241    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:34:15.825250    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:34:15.895060    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:34:15.895074    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:34:15.906674    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:34:15.906688    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:34:15.918467    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:34:15.918477    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:34:15.930309    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:34:15.930323    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:34:15.942411    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:34:15.942426    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:34:15.956816    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:34:15.956828    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:34:15.971589    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:34:15.971602    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:34:15.989824    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:34:15.989836    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
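
Each diagnostics cycle repeats the same fan-out: enumerate containers per component via a k8s_<component> name filter, then tail the last 400 lines of each hit (plus the kubelet and docker journals, dmesg, `kubectl describe nodes`, and container status). A sketch of the per-component part, with the component list and tail length taken from the log; this is illustrative, not minikube's logs.go:

    // gatherlogs.go — for each control-plane component, list matching
    // containers (running or exited) and dump their last 400 log lines.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) []string {
    	out, _ := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		for _, id := range containerIDs(c) {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
    		}
    	}
    }
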
	I0719 07:34:18.519432    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:23.522067    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:23.522443    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:34:23.557733    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:34:23.557857    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:34:23.577319    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:34:23.577431    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:34:23.592064    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:34:23.592138    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:34:23.604169    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:34:23.604250    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:34:23.623462    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:34:23.623537    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:34:23.634313    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:34:23.634397    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:34:23.644943    8434 logs.go:276] 0 containers: []
	W0719 07:34:23.644954    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:34:23.645015    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:34:23.655673    8434 logs.go:276] 0 containers: []
	W0719 07:34:23.655687    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:34:23.655704    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:34:23.655709    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:34:23.666752    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:34:23.666765    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:34:23.680689    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:34:23.680708    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:34:23.695446    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:34:23.695456    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:34:23.707054    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:34:23.707065    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:34:23.721153    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:34:23.721164    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:34:23.733476    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:34:23.733486    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:34:23.758652    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:34:23.758660    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:34:23.793554    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:34:23.793561    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:34:23.829218    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:34:23.829233    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:34:23.842983    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:34:23.842996    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:34:23.847214    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:34:23.847222    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:34:23.870576    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:34:23.870589    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:34:23.888795    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:34:23.888805    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:34:23.905532    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:34:23.905543    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:34:26.418152    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:31.420975    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:31.421371    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:34:31.461388    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:34:31.461522    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:34:31.482837    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:34:31.482952    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:34:31.498057    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:34:31.498139    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:34:31.512336    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:34:31.512412    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:34:31.523287    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:34:31.523356    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:34:31.534297    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:34:31.534368    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:34:31.544657    8434 logs.go:276] 0 containers: []
	W0719 07:34:31.544667    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:34:31.544726    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:34:31.555274    8434 logs.go:276] 0 containers: []
	W0719 07:34:31.555285    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:34:31.555293    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:34:31.555299    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:34:31.590967    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:34:31.590980    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:34:31.606208    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:34:31.606220    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:34:31.632532    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:34:31.632543    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:34:31.644202    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:34:31.644213    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:34:31.662316    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:34:31.662328    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:34:31.677467    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:34:31.677476    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:34:31.689105    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:34:31.689120    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:34:31.693732    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:34:31.693740    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:34:31.733304    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:34:31.733320    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:34:31.752364    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:34:31.752373    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:34:31.766137    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:34:31.766148    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:34:31.777483    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:34:31.777494    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:34:31.795739    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:34:31.795748    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:34:31.814413    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:34:31.814431    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:34:34.331659    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:39.334317    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:39.334751    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:34:39.375028    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:34:39.375174    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:34:39.396136    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:34:39.396246    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:34:39.411912    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:34:39.411985    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:34:39.424912    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:34:39.424980    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:34:39.436674    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:34:39.436742    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:34:39.447498    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:34:39.447568    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:34:39.457925    8434 logs.go:276] 0 containers: []
	W0719 07:34:39.457936    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:34:39.457992    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:34:39.468114    8434 logs.go:276] 0 containers: []
	W0719 07:34:39.468124    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:34:39.468134    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:34:39.468140    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:34:39.479113    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:34:39.479127    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:34:39.493481    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:34:39.493493    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:34:39.509997    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:34:39.510010    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:34:39.523921    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:34:39.523933    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:34:39.535588    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:34:39.535600    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:34:39.547843    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:34:39.547857    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:34:39.565502    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:34:39.565515    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:34:39.583104    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:34:39.583116    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:34:39.598248    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:34:39.598257    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:34:39.624991    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:34:39.624998    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:34:39.659941    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:34:39.659955    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:34:39.665523    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:34:39.665532    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:34:39.681301    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:34:39.681311    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:34:39.692910    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:34:39.692920    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:34:42.231901    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:47.235181    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:47.235533    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:34:47.273479    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:34:47.273603    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:34:47.294458    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:34:47.294566    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:34:47.315260    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:34:47.315330    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:34:47.334393    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:34:47.334464    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:34:47.344978    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:34:47.345046    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:34:47.355554    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:34:47.355621    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:34:47.365812    8434 logs.go:276] 0 containers: []
	W0719 07:34:47.365825    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:34:47.365881    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:34:47.376680    8434 logs.go:276] 0 containers: []
	W0719 07:34:47.376690    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:34:47.376698    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:34:47.376703    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:34:47.393095    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:34:47.393105    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:34:47.411211    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:34:47.411223    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:34:47.422921    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:34:47.422933    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:34:47.457989    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:34:47.458023    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:34:47.470829    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:34:47.470840    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:34:47.489584    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:34:47.489595    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:34:47.494231    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:34:47.494239    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:34:47.508922    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:34:47.508932    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:34:47.520585    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:34:47.520596    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:34:47.535948    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:34:47.535959    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:34:47.550551    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:34:47.550563    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:34:47.565548    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:34:47.565560    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:34:47.590091    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:34:47.590098    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:34:47.601558    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:34:47.601571    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:34:50.139262    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:34:55.141690    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:34:55.142121    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:34:55.183612    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:34:55.183768    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:34:55.208081    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:34:55.208178    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:34:55.222262    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:34:55.222339    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:34:55.233711    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:34:55.233781    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:34:55.244140    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:34:55.244207    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:34:55.257423    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:34:55.257491    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:34:55.268265    8434 logs.go:276] 0 containers: []
	W0719 07:34:55.268276    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:34:55.268332    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:34:55.279344    8434 logs.go:276] 0 containers: []
	W0719 07:34:55.279355    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:34:55.279362    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:34:55.279368    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:34:55.293629    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:34:55.293640    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:34:55.304744    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:34:55.304758    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:34:55.316585    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:34:55.316596    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:34:55.333150    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:34:55.333162    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:34:55.370466    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:34:55.370472    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:34:55.374570    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:34:55.374575    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:34:55.410135    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:34:55.410147    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:34:55.422683    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:34:55.422692    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:34:55.440037    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:34:55.440050    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:34:55.451860    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:34:55.451871    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:34:55.468100    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:34:55.468110    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:34:55.480013    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:34:55.480024    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:34:55.494123    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:34:55.494136    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:34:55.512003    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:34:55.512016    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:34:58.038117    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:03.040842    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:03.041117    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:03.066242    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:03.066396    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:03.087683    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:03.087767    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:03.100183    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:03.100258    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:03.111311    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:03.111383    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:03.121651    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:03.121719    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:03.132076    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:03.132145    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:03.142010    8434 logs.go:276] 0 containers: []
	W0719 07:35:03.142025    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:03.142080    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:03.152558    8434 logs.go:276] 0 containers: []
	W0719 07:35:03.152568    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:03.152578    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:03.152583    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:03.166385    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:03.166399    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:03.180877    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:03.180887    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:03.191824    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:03.191834    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:03.210173    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:03.210183    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:03.222702    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:03.222713    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:03.238121    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:03.238132    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:35:03.264001    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:03.264008    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:03.299076    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:03.299089    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:03.310589    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:03.310600    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:03.322187    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:03.322197    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:03.333939    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:03.333950    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:03.351073    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:03.351083    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:03.386466    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:03.386474    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:03.400152    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:03.400163    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:05.906462    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:10.909290    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:10.909668    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:10.942680    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:10.942796    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:10.965513    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:10.965602    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:10.979254    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:10.979314    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:10.990940    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:10.991009    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:11.001424    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:11.001492    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:11.012035    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:11.012101    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:11.022168    8434 logs.go:276] 0 containers: []
	W0719 07:35:11.022181    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:11.022238    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:11.032761    8434 logs.go:276] 0 containers: []
	W0719 07:35:11.032771    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:11.032779    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:11.032784    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:11.070587    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:11.070599    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:11.082772    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:11.082788    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:11.095921    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:11.095932    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:11.130693    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:11.130707    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:11.143425    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:11.143434    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:35:11.169627    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:11.169637    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:11.184696    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:11.184707    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:11.201395    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:11.201411    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:11.219497    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:11.219507    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:11.234376    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:11.234385    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:11.239108    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:11.239115    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:11.250248    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:11.250261    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:11.263804    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:11.263813    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:11.278513    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:11.278528    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:13.791786    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:18.792364    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:18.792789    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:18.833257    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:18.833375    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:18.855462    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:18.855579    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:18.870923    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:18.870989    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:18.883769    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:18.883841    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:18.895304    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:18.895369    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:18.906147    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:18.906203    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:18.916158    8434 logs.go:276] 0 containers: []
	W0719 07:35:18.916170    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:18.916215    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:18.926896    8434 logs.go:276] 0 containers: []
	W0719 07:35:18.926908    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:18.926916    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:18.926922    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:18.962017    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:18.962026    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:18.974087    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:18.974099    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:18.985373    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:18.985387    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:19.000147    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:19.000159    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:19.037215    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:19.037228    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:19.041465    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:19.041473    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:19.053375    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:19.053385    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:19.069671    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:19.069682    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:19.086895    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:19.086907    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:19.104376    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:19.104388    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:35:19.129926    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:19.129935    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:19.141600    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:19.141612    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:19.164778    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:19.164789    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:19.188385    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:19.188396    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:21.711454    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:26.714198    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:26.714657    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:26.754503    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:26.754643    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:26.776065    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:26.776177    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:26.791235    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:26.791306    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:26.803843    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:26.803910    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:26.815146    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:26.815224    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:26.826202    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:26.826298    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:26.836799    8434 logs.go:276] 0 containers: []
	W0719 07:35:26.836810    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:26.836869    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:26.853254    8434 logs.go:276] 0 containers: []
	W0719 07:35:26.853263    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:26.853271    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:26.853277    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:26.865191    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:26.865205    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:26.878880    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:26.878893    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:26.896926    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:26.896936    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:26.909102    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:26.909115    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:26.928011    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:26.928022    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:35:26.952161    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:26.952171    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:26.987014    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:26.987021    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:26.991098    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:26.991106    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:27.005334    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:27.005346    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:27.017173    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:27.017184    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:27.028528    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:27.028541    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:27.050515    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:27.050526    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:27.086236    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:27.086247    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:27.100858    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:27.100869    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:29.615740    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:34.618181    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:34.618723    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:34.655094    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:34.655255    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:34.676014    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:34.676120    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:34.691125    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:34.691199    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:34.703645    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:34.703724    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:34.715423    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:34.715491    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:34.726583    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:34.726649    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:34.736919    8434 logs.go:276] 0 containers: []
	W0719 07:35:34.736933    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:34.736990    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:34.747603    8434 logs.go:276] 0 containers: []
	W0719 07:35:34.747614    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:34.747621    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:34.747629    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:34.763435    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:34.763450    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:34.775143    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:34.775156    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:34.789393    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:34.789406    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:34.802334    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:34.802348    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:34.807175    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:34.807181    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:34.842289    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:34.842300    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:34.858454    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:34.858466    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:34.871621    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:34.871632    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:34.883232    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:34.883244    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:34.900841    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:34.900851    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:34.915741    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:34.915750    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:35:34.941012    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:34.941020    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:34.958596    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:34.958605    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:34.995970    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:34.995983    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:37.512448    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:42.514704    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:42.514814    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:42.525963    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:42.526036    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:42.536442    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:42.536513    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:42.547122    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:42.547184    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:42.565902    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:42.565984    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:42.578692    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:42.578762    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:42.601484    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:42.601557    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:42.613109    8434 logs.go:276] 0 containers: []
	W0719 07:35:42.613124    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:42.613185    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:42.624626    8434 logs.go:276] 0 containers: []
	W0719 07:35:42.624640    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:42.624651    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:42.624658    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:42.629437    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:42.629450    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:42.649471    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:42.649491    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:42.663419    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:42.663433    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:42.677510    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:42.677522    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:42.720278    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:42.720291    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:42.736816    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:42.736831    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:42.755960    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:42.755980    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:42.779355    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:42.779369    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:42.799863    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:42.799884    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:42.817778    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:42.817791    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:42.858397    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:42.858420    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:42.874733    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:42.874746    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:42.888145    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:42.888159    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:35:42.915754    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:42.915769    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:45.431044    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:50.433445    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:50.433860    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:50.469577    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:50.469709    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:50.493533    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:50.493645    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:50.508630    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:50.508706    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:50.521295    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:50.521376    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:50.532990    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:50.533057    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:50.544495    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:50.544563    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:50.554952    8434 logs.go:276] 0 containers: []
	W0719 07:35:50.554963    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:50.555020    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:50.565316    8434 logs.go:276] 0 containers: []
	W0719 07:35:50.565327    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:50.565335    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:50.565342    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:50.600914    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:50.600925    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:50.615279    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:50.615290    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:50.627208    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:50.627219    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:50.644018    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:50.644029    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:50.658161    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:50.658171    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:50.670076    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:50.670088    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:50.688213    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:50.688226    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:50.703271    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:50.703281    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:50.714996    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:50.715005    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:50.752180    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:50.752188    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:50.756190    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:50.756199    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:50.768091    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:50.768102    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:50.780152    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:50.780162    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:50.797734    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:50.797744    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:35:53.325241    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:35:58.327436    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:35:58.327555    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:35:58.339076    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:35:58.339155    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:35:58.350624    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:35:58.350697    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:35:58.362151    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:35:58.362223    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:35:58.373884    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:35:58.373959    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:35:58.388303    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:35:58.388370    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:35:58.399447    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:35:58.399534    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:35:58.411048    8434 logs.go:276] 0 containers: []
	W0719 07:35:58.411060    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:35:58.411124    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:35:58.422437    8434 logs.go:276] 0 containers: []
	W0719 07:35:58.422451    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:35:58.422460    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:35:58.422497    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:35:58.427410    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:35:58.427421    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:35:58.444287    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:35:58.444305    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:35:58.459423    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:35:58.459434    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:35:58.474412    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:35:58.474424    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:35:58.490151    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:35:58.490169    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:35:58.503718    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:35:58.503729    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:35:58.542954    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:35:58.542962    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:35:58.557247    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:35:58.557258    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:35:58.576966    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:35:58.576977    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:35:58.588915    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:35:58.588931    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:35:58.613618    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:35:58.613629    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:35:58.649260    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:35:58.649275    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:35:58.664778    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:35:58.664788    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:35:58.676625    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:35:58.676636    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:01.203956    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:06.206600    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:06.206802    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:06.220221    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:06.220278    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:06.231354    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:06.231409    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:06.242813    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:06.242875    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:06.254538    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:06.254589    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:06.265683    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:06.265740    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:06.277170    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:06.277227    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:06.291810    8434 logs.go:276] 0 containers: []
	W0719 07:36:06.291820    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:06.291866    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:06.302101    8434 logs.go:276] 0 containers: []
	W0719 07:36:06.302152    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:06.302165    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:06.302171    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:06.307440    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:06.307450    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:06.344738    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:06.344754    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:06.356668    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:06.356681    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:06.370134    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:06.370150    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:06.383021    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:06.383033    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:06.401332    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:06.401344    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:06.439940    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:06.439954    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:06.454471    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:06.454481    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:06.471665    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:06.471679    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:06.487880    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:06.487898    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:06.505177    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:06.505189    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:06.524297    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:06.524316    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:06.540959    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:06.540972    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:06.566856    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:06.566882    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:09.087816    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:14.090381    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:14.090716    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:14.117832    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:14.117961    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:14.155228    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:14.155307    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:14.167264    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:14.167331    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:14.178839    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:14.178912    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:14.192981    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:14.193054    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:14.204481    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:14.204562    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:14.214683    8434 logs.go:276] 0 containers: []
	W0719 07:36:14.214696    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:14.214758    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:14.226775    8434 logs.go:276] 0 containers: []
	W0719 07:36:14.226788    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:14.226796    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:14.226802    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:14.231424    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:14.231432    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:14.247741    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:14.247753    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:14.259338    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:14.259349    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:14.281070    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:14.281085    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:14.306113    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:14.306128    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:14.317650    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:14.317660    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:14.329118    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:14.329129    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:14.343260    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:14.343271    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:14.359025    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:14.359035    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:14.378713    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:14.378723    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:14.412268    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:14.412283    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:14.448475    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:14.448485    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:14.470402    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:14.470413    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:14.488878    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:14.488892    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:17.003394    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:22.006174    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:22.006420    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:22.031939    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:22.032063    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:22.048197    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:22.048277    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:22.061844    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:22.061916    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:22.073681    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:22.073747    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:22.084130    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:22.084197    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:22.095013    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:22.095081    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:22.106361    8434 logs.go:276] 0 containers: []
	W0719 07:36:22.106373    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:22.106436    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:22.117350    8434 logs.go:276] 0 containers: []
	W0719 07:36:22.117365    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:22.117373    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:22.117379    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:22.154813    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:22.154821    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:22.159106    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:22.159113    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:22.170671    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:22.170683    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:22.187650    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:22.187661    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:22.205365    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:22.205378    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:22.218075    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:22.218086    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:22.256400    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:22.256412    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:22.270963    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:22.270977    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:22.285690    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:22.285701    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:22.296912    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:22.296922    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:22.316376    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:22.316389    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:22.340244    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:22.340251    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:22.353784    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:22.353794    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:22.365717    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:22.365728    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:24.882253    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:29.884685    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:29.884881    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:29.896935    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:29.896999    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:29.908512    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:29.908586    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:29.919475    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:29.919538    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:29.930708    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:29.930775    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:29.941371    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:29.941436    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:29.951795    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:29.951860    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:29.962669    8434 logs.go:276] 0 containers: []
	W0719 07:36:29.962678    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:29.962732    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:29.981413    8434 logs.go:276] 0 containers: []
	W0719 07:36:29.981424    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:29.981432    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:29.981438    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:29.999458    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:29.999469    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:30.012390    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:30.012400    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:30.024465    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:30.024475    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:30.039404    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:30.039414    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:30.051055    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:30.051066    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:30.055580    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:30.055591    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:30.090985    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:30.090994    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:30.109341    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:30.109352    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:30.123299    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:30.123312    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:30.137865    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:30.137875    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:30.155707    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:30.155717    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:30.180006    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:30.180012    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:30.191929    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:30.191940    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:30.228135    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:30.228145    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:32.741728    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:37.743916    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:37.744038    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:37.762311    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:37.762396    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:37.785805    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:37.785877    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:37.805652    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:37.805727    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:37.816904    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:37.816976    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:37.832002    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:37.832065    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:37.845835    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:37.845906    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:37.857394    8434 logs.go:276] 0 containers: []
	W0719 07:36:37.857406    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:37.857460    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:37.867447    8434 logs.go:276] 0 containers: []
	W0719 07:36:37.867458    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:37.867466    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:37.867471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:37.883811    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:37.883821    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:37.908622    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:37.908633    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:37.943374    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:37.943386    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:37.963932    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:37.963942    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:37.975522    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:37.975536    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:37.991574    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:37.991587    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:38.006419    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:38.006429    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:38.018576    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:38.018586    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:38.056356    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:38.056363    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:38.060704    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:38.060710    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:38.072097    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:38.072113    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:38.090654    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:38.090665    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:38.109921    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:38.109930    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:38.123722    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:38.123734    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:40.639620    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:45.637811    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:45.638035    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:45.661001    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:45.661115    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:45.677163    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:45.677248    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:45.689697    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:45.689770    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:45.702464    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:45.702535    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:45.713452    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:45.713521    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:45.724207    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:45.724273    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:45.734686    8434 logs.go:276] 0 containers: []
	W0719 07:36:45.734697    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:45.734753    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:45.744468    8434 logs.go:276] 0 containers: []
	W0719 07:36:45.744483    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:45.744490    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:45.744496    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:45.781205    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:45.781214    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:45.795302    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:45.795311    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:45.812912    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:45.812922    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:45.836868    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:45.836879    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:45.841166    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:45.841174    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:45.852749    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:45.852764    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:45.867036    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:45.867048    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:45.878020    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:45.878031    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:45.890130    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:45.890142    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:45.901635    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:45.901646    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:45.920056    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:45.920069    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:45.935127    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:45.935136    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:45.970264    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:45.970273    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:45.986514    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:45.986523    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:48.500422    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:53.501185    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:53.501298    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:53.516693    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:53.516766    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:53.529754    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:53.529826    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:53.542223    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:53.542296    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:53.554514    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:53.554585    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:53.566528    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:53.566600    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:53.579423    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:53.579492    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:53.594726    8434 logs.go:276] 0 containers: []
	W0719 07:36:53.594740    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:53.594803    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:53.607895    8434 logs.go:276] 0 containers: []
	W0719 07:36:53.607909    8434 logs.go:278] No container was found matching "storage-provisioner"
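
Components listed with two container IDs (for example kube-apiserver [8f87efd9c8ae 5e4a72cfc197]) most likely pair an exited pre-restart container with its replacement, since docker ps -a includes stopped containers. A hedged way to tell them apart by status:

    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}  {{.Status}}  {{.Names}}'
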
	I0719 07:36:53.607917    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:53.607923    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:53.647507    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:53.647522    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:53.660210    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:53.660224    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:53.677476    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:53.677493    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:53.694541    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:53.694554    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:53.715496    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:53.715516    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:53.733334    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:53.733355    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:53.746768    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:53.746780    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:53.765327    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:53.765341    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:53.789562    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:53.789576    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:53.801827    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:53.801838    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:53.839991    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:53.840009    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:53.845118    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:53.845127    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:53.859657    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:53.859669    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:53.871861    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:53.871873    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:56.387363    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:01.388749    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:01.388914    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:37:01.406120    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:37:01.406210    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:37:01.419651    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:37:01.419726    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:37:01.431106    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:37:01.431172    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:37:01.441836    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:37:01.441908    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:37:01.452155    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:37:01.452220    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:37:01.462704    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:37:01.462759    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:37:01.473325    8434 logs.go:276] 0 containers: []
	W0719 07:37:01.473339    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:37:01.473396    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:37:01.484927    8434 logs.go:276] 0 containers: []
	W0719 07:37:01.484937    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:37:01.484944    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:37:01.484950    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:37:01.523994    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:37:01.524003    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:37:01.537934    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:37:01.537947    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:37:01.549458    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:37:01.549470    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:37:01.563071    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:37:01.563081    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:37:01.582711    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:37:01.582725    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:37:01.600050    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:37:01.600063    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:37:01.618120    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:37:01.618133    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:37:01.632450    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:37:01.632462    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:37:01.643511    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:37:01.643523    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:37:01.659125    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:37:01.659138    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:37:01.671041    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:37:01.671052    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:37:01.675633    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:37:01.675641    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:37:01.712245    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:37:01.712257    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:37:01.724310    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:37:01.724320    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:37:04.247310    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:09.248987    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:09.249155    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:37:09.261292    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:37:09.261376    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:37:09.272761    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:37:09.272834    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:37:09.285634    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:37:09.285708    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:37:09.300496    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:37:09.300570    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:37:09.312342    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:37:09.312407    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:37:09.326555    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:37:09.326626    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:37:09.336968    8434 logs.go:276] 0 containers: []
	W0719 07:37:09.336979    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:37:09.337035    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:37:09.348064    8434 logs.go:276] 0 containers: []
	W0719 07:37:09.348075    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:37:09.348083    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:37:09.348088    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:37:09.359344    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:37:09.359355    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:37:09.376438    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:37:09.376454    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:37:09.389043    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:37:09.389054    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:37:09.393568    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:37:09.393575    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:37:09.429658    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:37:09.429675    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:37:09.443300    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:37:09.443315    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:37:09.455226    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:37:09.455238    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:37:09.470530    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:37:09.470543    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:37:09.485783    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:37:09.485796    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:37:09.504157    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:37:09.504166    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:37:09.519764    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:37:09.519780    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:37:09.532471    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:37:09.532486    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:37:09.567509    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:37:09.567518    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:37:09.581334    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:37:09.581345    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:37:12.107504    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:17.109400    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:17.109453    8434 kubeadm.go:597] duration metric: took 4m5.016966625s to restartPrimaryControlPlane
	W0719 07:37:17.109504    8434 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 07:37:17.109527    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0719 07:37:18.045159    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
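
Having given up on restarting the existing control plane after roughly 4m5s, minikube falls back to a full kubeadm reset followed by a fresh kubeadm init. After the reset, the kubelet is expected to be inactive until init rewrites its configuration; a hypothetical manual check:

    sudo systemctl is-active kubelet || echo "kubelet inactive (expected right after a reset)"
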
	I0719 07:37:18.050064    8434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 07:37:18.052905    8434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 07:37:18.055536    8434 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 07:37:18.055543    8434 kubeadm.go:157] found existing configuration files:
	
	I0719 07:37:18.055566    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/admin.conf
	I0719 07:37:18.058657    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 07:37:18.058682    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 07:37:18.061888    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/kubelet.conf
	I0719 07:37:18.064390    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 07:37:18.064412    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 07:37:18.067023    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/controller-manager.conf
	I0719 07:37:18.070187    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 07:37:18.070212    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 07:37:18.072981    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/scheduler.conf
	I0719 07:37:18.075363    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 07:37:18.075383    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
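
The four grep/rm pairs above are a stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not reference the expected API endpoint is deleted before kubeadm init runs. Condensed into a sketch using the same endpoint and file list as the log:

    ep="https://control-plane.minikube.internal:51189"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done
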
	I0719 07:37:18.078477    8434 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 07:37:18.095386    8434 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 07:37:18.095446    8434 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 07:37:18.143796    8434 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 07:37:18.143854    8434 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 07:37:18.143909    8434 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 07:37:18.195393    8434 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 07:37:18.204627    8434 out.go:204]   - Generating certificates and keys ...
	I0719 07:37:18.204661    8434 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 07:37:18.204697    8434 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 07:37:18.204772    8434 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 07:37:18.204822    8434 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 07:37:18.204878    8434 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 07:37:18.204933    8434 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 07:37:18.204971    8434 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 07:37:18.205007    8434 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 07:37:18.205055    8434 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 07:37:18.205100    8434 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 07:37:18.205131    8434 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 07:37:18.205159    8434 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 07:37:18.555124    8434 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 07:37:18.608243    8434 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 07:37:18.662664    8434 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 07:37:18.691156    8434 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 07:37:18.721204    8434 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 07:37:18.721524    8434 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 07:37:18.721665    8434 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 07:37:18.795327    8434 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 07:37:18.799523    8434 out.go:204]   - Booting up control plane ...
	I0719 07:37:18.799567    8434 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 07:37:18.800252    8434 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 07:37:18.800626    8434 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 07:37:18.800872    8434 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 07:37:18.801633    8434 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 07:37:23.303221    8434 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501344 seconds
	I0719 07:37:23.303301    8434 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 07:37:23.307162    8434 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 07:37:23.817663    8434 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 07:37:23.817770    8434 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-059000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 07:37:24.323407    8434 kubeadm.go:310] [bootstrap-token] Using token: nv695p.iid5xnr7pfj6tlwc
	I0719 07:37:24.329675    8434 out.go:204]   - Configuring RBAC rules ...
	I0719 07:37:24.329769    8434 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 07:37:24.329852    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 07:37:24.333473    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 07:37:24.335183    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 07:37:24.336808    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 07:37:24.338140    8434 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 07:37:24.341930    8434 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 07:37:24.494635    8434 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 07:37:24.728277    8434 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 07:37:24.728796    8434 kubeadm.go:310] 
	I0719 07:37:24.728825    8434 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 07:37:24.728830    8434 kubeadm.go:310] 
	I0719 07:37:24.728873    8434 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 07:37:24.728880    8434 kubeadm.go:310] 
	I0719 07:37:24.728898    8434 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 07:37:24.728937    8434 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 07:37:24.728976    8434 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 07:37:24.728980    8434 kubeadm.go:310] 
	I0719 07:37:24.729012    8434 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 07:37:24.729016    8434 kubeadm.go:310] 
	I0719 07:37:24.729040    8434 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 07:37:24.729043    8434 kubeadm.go:310] 
	I0719 07:37:24.729070    8434 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 07:37:24.729126    8434 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 07:37:24.729180    8434 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 07:37:24.729186    8434 kubeadm.go:310] 
	I0719 07:37:24.729238    8434 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 07:37:24.729284    8434 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 07:37:24.729291    8434 kubeadm.go:310] 
	I0719 07:37:24.729351    8434 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nv695p.iid5xnr7pfj6tlwc \
	I0719 07:37:24.729415    8434 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 \
	I0719 07:37:24.729429    8434 kubeadm.go:310] 	--control-plane 
	I0719 07:37:24.729432    8434 kubeadm.go:310] 
	I0719 07:37:24.729480    8434 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 07:37:24.729484    8434 kubeadm.go:310] 
	I0719 07:37:24.729529    8434 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nv695p.iid5xnr7pfj6tlwc \
	I0719 07:37:24.729585    8434 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 
	I0719 07:37:24.729681    8434 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
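
The join commands printed above embed a discovery-token CA certificate hash. Should it ever need to be recomputed, the standard kubeadm recipe applies; the certificate path below follows the "[certs] Using certificateDir" line earlier and is an assumption about where minikube keeps ca.crt:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
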
	I0719 07:37:24.729691    8434 cni.go:84] Creating CNI manager for ""
	I0719 07:37:24.729700    8434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:37:24.734199    8434 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 07:37:24.744155    8434 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 07:37:24.747130    8434 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
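
The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation only, a minimal bridge conflist of the general shape minikube generates might look like the sketch below; every field value here is an illustrative assumption, not the actual file contents:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
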
	I0719 07:37:24.752013    8434 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 07:37:24.752079    8434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-059000 minikube.k8s.io/updated_at=2024_07_19T07_37_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=running-upgrade-059000 minikube.k8s.io/primary=true
	I0719 07:37:24.752123    8434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 07:37:24.793559    8434 ops.go:34] apiserver oom_adj: -16
	I0719 07:37:24.793572    8434 kubeadm.go:1113] duration metric: took 41.526209ms to wait for elevateKubeSystemPrivileges
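
The oom_adj value of -16 read above biases the kernel's OOM killer strongly away from the apiserver process. It can be inspected by hand with the same command the log runs:

    cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16 here
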
	I0719 07:37:24.793582    8434 kubeadm.go:394] duration metric: took 4m12.715741083s to StartCluster
	I0719 07:37:24.793595    8434 settings.go:142] acquiring lock: {Name:mk67df71d562cbffe9f3bde88489898c395cdfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:37:24.793771    8434 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:37:24.794162    8434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/kubeconfig: {Name:mk0c17b3830610cdae4c834f6bae9631cabc7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:37:24.794368    8434 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:37:24.794374    8434 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 07:37:24.794432    8434 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-059000"
	I0719 07:37:24.794441    8434 config.go:182] Loaded profile config "running-upgrade-059000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:37:24.794454    8434 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-059000"
	W0719 07:37:24.794459    8434 addons.go:243] addon storage-provisioner should already be in state true
	I0719 07:37:24.794471    8434 host.go:66] Checking if "running-upgrade-059000" exists ...
	I0719 07:37:24.794496    8434 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-059000"
	I0719 07:37:24.794509    8434 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-059000"
	I0719 07:37:24.799635    8434 out.go:177] * Verifying Kubernetes components...
	I0719 07:37:24.801113    8434 kapi.go:59] client config for running-upgrade-059000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106557790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 07:37:24.802328    8434 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-059000"
	W0719 07:37:24.802335    8434 addons.go:243] addon default-storageclass should already be in state true
	I0719 07:37:24.802344    8434 host.go:66] Checking if "running-upgrade-059000" exists ...
	I0719 07:37:24.802927    8434 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 07:37:24.802932    8434 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 07:37:24.802938    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
	I0719 07:37:24.806123    8434 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:37:24.806234    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:37:24.809156    8434 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:37:24.809161    8434 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 07:37:24.809167    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
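
Note that SSH reaches the guest through a host-side port forward (localhost:51157) rather than the guest IP, which is why file copies and remote commands keep succeeding even while the API server at 10.0.2.15:8443 is unreachable from the host. A hypothetical manual session with the key path from the log:

    ssh -p 51157 \
      -i /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa \
      docker@localhost
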
	I0719 07:37:24.877133    8434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 07:37:24.882256    8434 api_server.go:52] waiting for apiserver process to appear ...
	I0719 07:37:24.882304    8434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:37:24.888361    8434 api_server.go:72] duration metric: took 93.983459ms to wait for apiserver process to appear ...
	I0719 07:37:24.888372    8434 api_server.go:88] waiting for apiserver healthz status ...
	I0719 07:37:24.888380    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:24.893456    8434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:37:24.940075    8434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 07:37:29.890346    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:29.890389    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:34.890563    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:34.890588    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:39.890809    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:39.890834    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:44.891173    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:44.891223    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:49.892068    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:49.892111    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:54.892862    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:54.892905    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0719 07:37:55.218832    8434 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0719 07:37:55.224220    8434 out.go:177] * Enabled addons: storage-provisioner
	I0719 07:37:55.232115    8434 addons.go:510] duration metric: took 30.438460292s for enable addons: enabled=[storage-provisioner]
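
The asymmetry above is consistent with the networking picture: storage-provisioner is applied by kubectl inside the guest over SSH, while default-storageclass is configured from the host via client-go against https://10.0.2.15:8443 and therefore times out. A hedged workaround sketch, inspecting storage classes with the in-guest kubectl instead (same binary and kubeconfig paths used elsewhere in this log):

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get storageclass
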
	I0719 07:37:59.893617    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:59.893715    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:04.895173    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:04.895212    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:09.896433    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:09.896449    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:14.898328    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:14.898369    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:19.900225    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:19.900265    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:24.902485    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:24.902696    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:24.919987    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:24.920067    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:24.947561    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:24.947630    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:24.959488    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:24.959544    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:24.973377    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:24.973447    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:24.984191    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:24.984261    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:24.994511    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:24.994576    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:25.004694    8434 logs.go:276] 0 containers: []
	W0719 07:38:25.004704    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:25.004759    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:25.014957    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:25.014976    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:25.014983    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:25.029130    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:25.029141    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:25.063547    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:25.063555    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:25.068149    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:25.068156    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:25.104236    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:25.104246    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:25.119061    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:25.119076    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:25.131687    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:25.131698    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:25.156949    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:25.156959    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:25.172596    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:25.172607    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:25.186770    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:25.186780    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:25.198352    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:25.198362    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:25.209982    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:25.209995    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:25.228297    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:25.228307    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:27.742093    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:32.744436    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:32.744619    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:32.755340    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:32.755424    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:32.765803    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:32.765871    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:32.776593    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:32.776659    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:32.786979    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:32.787048    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:32.798028    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:32.798095    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:32.808534    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:32.808599    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:32.818571    8434 logs.go:276] 0 containers: []
	W0719 07:38:32.818582    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:32.818638    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:32.832971    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:32.832987    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:32.832993    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:32.844668    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:32.844681    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:32.859317    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:32.859329    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:32.871334    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:32.871345    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:32.895504    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:32.895511    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:32.929960    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:32.929969    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:32.969408    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:32.969417    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:32.983575    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:32.983585    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:33.001307    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:33.001317    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:33.012799    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:33.012812    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:33.024005    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:33.024016    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:33.028452    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:33.028461    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:33.042903    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:33.042915    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:35.556830    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:40.557390    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:40.557612    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:40.580640    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:40.580732    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:40.598235    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:40.598313    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:40.614439    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:40.614505    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:40.625209    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:40.625274    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:40.635850    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:40.635920    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:40.646298    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:40.646359    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:40.656465    8434 logs.go:276] 0 containers: []
	W0719 07:38:40.656476    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:40.656529    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:40.667073    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:40.667088    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:40.667093    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:40.702014    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:40.702025    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:40.707029    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:40.707039    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:40.721934    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:40.721946    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:40.736477    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:40.736489    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:40.747804    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:40.747815    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:40.766190    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:40.766200    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:40.790294    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:40.790304    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:40.825178    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:40.825193    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:40.839001    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:40.839011    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:40.850892    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:40.850905    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:40.862181    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:40.862196    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:40.876899    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:40.876910    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:43.388596    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:48.389194    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:48.389415    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:48.408433    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:48.408525    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:48.422266    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:48.422333    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:48.434474    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:48.434539    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:48.444910    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:48.444979    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:48.455225    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:48.455291    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:48.469045    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:48.469115    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:48.480051    8434 logs.go:276] 0 containers: []
	W0719 07:38:48.480062    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:48.480121    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:48.490819    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:48.490834    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:48.490840    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:48.502631    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:48.502642    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:48.514566    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:48.514577    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:48.526365    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:48.526376    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:48.560523    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:48.560538    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:48.565075    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:48.565083    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:48.579789    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:48.579800    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:48.594554    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:48.594565    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:48.606501    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:48.606516    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:48.640910    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:48.640922    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:48.652919    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:48.652932    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:48.667452    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:48.667463    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:48.684852    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:48.684864    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
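
That closes one full gathering pass: tail 400 lines from every discovered container, pull the kubelet and docker/cri-docker journals, and describe the nodes with the kubectl binary minikube staged into the guest for the cluster's Kubernetes version. The collectors reduce to three shapes, sketched here with local exec standing in for the SSH runner:

    package gather

    import "os/exec"

    // Per-container logs: docker logs --tail 400 <id>. CombinedOutput is
    // used because container logs may arrive on stderr.
    func containerLogs(id string) ([]byte, error) {
        return exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    }

    // Journald units: journalctl -u <unit>... -n 400, e.g. kubelet alone,
    // or docker plus cri-docker in one call as in the Run line above.
    func unitLogs(units ...string) ([]byte, error) {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", "400")
        return exec.Command("sudo", args...).CombinedOutput()
    }

    // Cluster view: the staged, version-pinned kubectl pointed at the
    // in-guest kubeconfig, as in the "describe nodes" Run line.
    func describeNodes() ([]byte, error) {
        return exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    }
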
	I0719 07:38:51.211907    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:56.214143    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:56.214394    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:56.237358    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:56.237455    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:56.252642    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:56.252724    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:56.264711    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:56.264781    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:56.275064    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:56.275127    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:56.285472    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:56.285543    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:56.297827    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:56.297886    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:56.308676    8434 logs.go:276] 0 containers: []
	W0719 07:38:56.308687    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:56.308741    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:56.319519    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:56.319539    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:56.319545    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:56.353269    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:56.353278    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:56.387324    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:56.387335    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:56.399426    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:56.399440    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:56.411465    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:56.411476    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:56.428613    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:56.428623    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:56.443495    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:56.443506    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:56.466915    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:56.466924    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:56.471516    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:56.471521    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:56.485789    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:56.485799    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:56.500343    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:56.500355    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:56.511884    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:56.511894    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:56.525820    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:56.525833    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
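
The "container status" line is the one runtime-agnostic collector: the backquoted `which crictl || echo crictl` substitutes either the full crictl path or the literal word crictl, so the command string never collapses to empty, and if crictl is absent or fails, the `|| sudo docker ps -a` fallback runs. Unrolled into explicit control flow (a sketch of the same logic):

    package gather

    import "os/exec"

    // containerStatus mirrors:
    //   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    // Prefer crictl; on any failure (including "command not found"),
    // fall back to plain docker.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
            return out, nil
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }
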
	I0719 07:38:59.039937    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:04.042266    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:04.042523    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:04.075361    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:04.075465    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:04.091905    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:04.091986    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:04.105516    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:04.105593    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:04.116933    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:04.116998    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:04.127526    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:04.127595    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:04.138080    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:04.138148    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:04.148207    8434 logs.go:276] 0 containers: []
	W0719 07:39:04.148235    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:04.148310    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:04.158499    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:04.158515    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:04.158522    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:04.193297    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:04.193306    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:04.198238    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:04.198244    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:04.209968    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:04.209980    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:04.228224    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:04.228236    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:04.252921    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:04.252933    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:04.265034    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:04.265045    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:04.302011    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:04.302022    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:04.317247    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:04.317261    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:04.331398    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:04.331411    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:04.343383    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:04.343394    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:04.358932    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:04.358942    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:04.374251    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:04.374262    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:06.888112    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:11.889021    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:11.889362    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:11.924051    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:11.924186    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:11.945771    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:11.945885    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:11.960946    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:11.961027    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:11.973251    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:11.973316    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:11.984950    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:11.985024    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:11.997539    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:11.997609    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:12.008017    8434 logs.go:276] 0 containers: []
	W0719 07:39:12.008030    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:12.008088    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:12.018875    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:12.018888    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:12.018894    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:12.033925    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:12.033935    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:12.046870    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:12.046880    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:12.065635    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:12.065645    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:12.082218    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:12.082228    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:12.102767    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:12.102779    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:12.114694    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:12.114704    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:12.150006    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:12.150016    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:12.184775    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:12.184785    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:12.209523    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:12.209531    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:12.222299    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:12.222309    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:12.233544    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:12.233555    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:12.238469    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:12.238478    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:14.754802    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:19.757206    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:19.757548    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:19.797264    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:19.797364    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:19.813403    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:19.813486    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:19.827057    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:19.827130    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:19.838018    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:19.838089    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:19.849120    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:19.849185    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:19.859900    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:19.859969    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:19.870299    8434 logs.go:276] 0 containers: []
	W0719 07:39:19.870313    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:19.870379    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:19.882023    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:19.882037    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:19.882044    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:19.896955    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:19.896968    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:19.910598    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:19.910608    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:19.922894    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:19.922905    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:19.939412    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:19.939424    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:19.959260    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:19.959269    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:19.984140    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:19.984148    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:19.996174    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:19.996184    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:20.031021    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:20.031029    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:20.035492    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:20.035498    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:20.076332    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:20.076342    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:20.091117    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:20.091127    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:20.105565    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:20.105577    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:22.620160    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:27.622401    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:27.622558    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:27.636562    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:27.636637    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:27.653827    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:27.653889    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:27.664286    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:27.664351    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:27.674834    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:27.674897    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:27.685277    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:27.685344    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:27.700242    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:27.700306    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:27.710663    8434 logs.go:276] 0 containers: []
	W0719 07:39:27.710677    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:27.710726    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:27.721309    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:27.721325    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:27.721331    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:27.757919    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:27.757931    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:27.774956    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:27.774970    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:27.791923    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:27.791934    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:27.803498    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:27.803507    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:27.815008    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:27.815022    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:27.826503    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:27.826514    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:27.851300    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:27.851308    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:27.855481    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:27.855487    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:27.894930    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:27.894941    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:27.909450    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:27.909462    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:27.927078    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:27.927088    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:27.938721    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:27.938733    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:30.452639    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:35.454999    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:35.455256    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:35.484482    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:35.484604    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:35.502282    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:35.502367    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:35.516601    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:35.516679    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:35.527875    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:35.527940    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:35.538473    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:35.538538    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:35.548864    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:35.548927    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:35.559280    8434 logs.go:276] 0 containers: []
	W0719 07:39:35.559291    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:35.559359    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:35.574005    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:35.574021    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:35.574026    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:35.591600    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:35.591611    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:35.603209    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:35.603222    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:35.615197    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:35.615208    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:35.650412    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:35.650423    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:35.662461    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:35.662471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:35.673983    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:35.673994    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:35.687747    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:35.687757    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:35.700141    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:35.700151    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:35.714469    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:35.714479    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:35.739314    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:35.739321    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:35.773655    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:35.773663    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:35.778072    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:35.778078    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:38.300197    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:43.302518    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:43.302873    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:43.340636    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:43.340767    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:43.361058    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:43.361145    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:43.376104    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
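
From this pass on, the coredns filter returns four IDs where earlier passes returned two. Since discovery uses ps -a, exited containers stay in the listing, so this is consistent with the coredns pods having been recreated around 07:39:43 while the apiserver stayed unreachable: the two earlier IDs (8cc60ed45693, ed5753ac5bdd) persist alongside the new ones. A status filter would separate the halves (illustrative sketch, same filter syntax as the discovery commands):

    package gather

    import "os/exec"

    // exitedCoredns lists only the exited coredns containers; swapping in
    // status=running (or dropping the filter) gives the other half.
    func exitedCoredns() ([]byte, error) {
        return exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_coredns",
            "--filter", "status=exited",
            "--format", "{{.ID}} {{.Status}}").Output()
    }
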
	I0719 07:39:43.376184    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:43.390178    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:43.390249    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:43.404279    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:43.404353    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:43.414885    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:43.414957    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:43.425488    8434 logs.go:276] 0 containers: []
	W0719 07:39:43.425504    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:43.425568    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:43.436652    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:43.436668    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:43.436673    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:43.441138    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:43.441146    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:43.452740    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:43.452751    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:43.488136    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:43.488144    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:43.500196    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:43.500208    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:43.512037    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:43.512048    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:43.527593    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:43.527603    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:43.541310    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:39:43.541321    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:39:43.552866    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:43.552876    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:43.564612    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:43.564623    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:43.602333    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:39:43.602343    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:39:43.613443    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:43.613457    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:43.627843    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:43.627852    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:43.648810    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:43.648820    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:43.660813    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:43.660824    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:46.186258    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:51.188512    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:51.188736    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:51.211249    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:51.211371    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:51.226364    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:51.226446    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:51.238899    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:51.238975    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:51.252022    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:51.252091    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:51.263077    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:51.263142    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:51.277719    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:51.277792    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:51.288423    8434 logs.go:276] 0 containers: []
	W0719 07:39:51.288433    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:51.288487    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:51.299683    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:51.299699    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:51.299705    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:51.325053    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:39:51.325061    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:39:51.336211    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:51.336222    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:51.350883    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:51.350895    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:51.362918    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:51.362929    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:51.375935    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:51.375948    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:51.410953    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:51.410962    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:51.429379    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:51.429392    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:51.441459    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:51.441471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:51.455650    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:51.455664    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:51.467844    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:39:51.467858    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:39:51.479228    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:51.479238    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:51.491382    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:51.491392    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:51.508389    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:51.508399    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:51.513009    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:51.513014    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:54.047811    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:59.050074    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:59.050284    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:59.065066    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:59.065145    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:59.076726    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:59.076801    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:59.088901    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:59.088981    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:59.099381    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:59.099450    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:59.110204    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:59.110271    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:59.121027    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:59.121097    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:59.131209    8434 logs.go:276] 0 containers: []
	W0719 07:39:59.131225    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:59.131282    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:59.142209    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:59.142229    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:59.142236    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:59.146865    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:59.146874    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:59.185742    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:59.185756    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:59.201297    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:59.201310    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:59.216107    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:59.216117    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:59.227846    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:59.227857    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:59.252768    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:59.252777    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:59.287005    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:39:59.287015    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:39:59.298418    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:39:59.298428    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:39:59.310279    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:59.310289    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:59.322070    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:59.322080    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:59.334164    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:59.334176    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:59.348284    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:59.348295    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:59.362363    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:59.362372    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:59.379633    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:59.379643    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:01.891852    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:06.892599    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:06.892895    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:06.921605    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:06.921754    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:06.940707    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:06.940800    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:06.962385    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:06.962448    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:06.974282    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:06.974351    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:06.985198    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:06.985264    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:06.996461    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:06.996529    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:07.006389    8434 logs.go:276] 0 containers: []
	W0719 07:40:07.006401    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:07.006455    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:07.016780    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:07.016799    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:07.016803    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:07.053346    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:07.053356    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:07.067850    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:07.067862    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:07.079519    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:07.079530    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:07.109533    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:07.109543    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:07.145360    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:07.145371    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:07.159979    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:07.159993    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:07.171610    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:07.171620    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:07.183459    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:07.183468    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:07.188000    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:07.188006    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:07.199830    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:07.199841    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:07.214457    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:07.214471    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:07.238862    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:07.238869    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:07.250549    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:07.250563    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:07.262558    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:07.262567    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:09.776646    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:14.779143    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:14.779246    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:14.793470    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:14.793550    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:14.805509    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:14.805578    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:14.818778    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:14.818851    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:14.829058    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:14.829128    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:14.839376    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:14.839441    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:14.850029    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:14.850100    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:14.860342    8434 logs.go:276] 0 containers: []
	W0719 07:40:14.860356    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:14.860409    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:14.874700    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:14.874718    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:14.874725    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:14.886193    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:14.886205    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:14.920254    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:14.920264    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:14.934058    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:14.934068    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:14.946125    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:14.946136    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:14.958044    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:14.958053    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:14.970932    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:14.970943    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:14.985881    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:14.985893    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:14.998029    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:14.998038    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:15.021647    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:15.021655    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:15.026503    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:15.026512    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:15.041238    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:15.041248    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:15.052744    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:15.052753    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:15.085870    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:15.085883    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:15.100808    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:15.100819    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:17.620013    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:22.622296    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:22.622468    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:22.636926    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:22.637005    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:22.648956    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:22.649026    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:22.659809    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:22.659885    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:22.670241    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:22.670309    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:22.680128    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:22.680195    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:22.690108    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:22.690179    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:22.700247    8434 logs.go:276] 0 containers: []
	W0719 07:40:22.700260    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:22.700311    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:22.710342    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:22.710359    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:22.710364    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:22.721511    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:22.721524    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:22.747122    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:22.747137    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:22.809718    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:22.809731    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:22.824730    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:22.824740    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:22.836656    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:22.836667    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:22.851823    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:22.851834    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:22.869163    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:22.869172    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:22.873635    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:22.873645    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:22.890749    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:22.890759    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:22.902888    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:22.902902    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:22.916622    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:22.916633    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:22.950085    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:22.950096    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:22.964461    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:22.964471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:22.976633    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:22.976643    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:25.490712    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:30.493476    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:30.493855    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:30.526286    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:30.526411    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:30.546447    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:30.546540    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:30.564059    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:30.564131    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:30.585076    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:30.585144    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:30.595731    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:30.595804    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:30.606583    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:30.606650    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:30.617209    8434 logs.go:276] 0 containers: []
	W0719 07:40:30.617224    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:30.617277    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:30.627810    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:30.627827    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:30.627833    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:30.645516    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:30.645526    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:30.670593    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:30.670602    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:30.682325    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:30.682339    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:30.697301    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:30.697314    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:30.711036    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:30.711047    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:30.722106    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:30.722116    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:30.737514    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:30.737525    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:30.742465    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:30.742471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:30.754504    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:30.754517    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:30.766215    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:30.766226    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:30.799538    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:30.799547    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:30.813758    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:30.813771    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:30.828255    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:30.828266    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:30.841670    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:30.841679    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:33.379152    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:38.381454    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:38.381634    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:38.396912    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:38.396993    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:38.409129    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:38.409197    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:38.423537    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:38.423621    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:38.434250    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:38.434313    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:38.444909    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:38.444974    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:38.456360    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:38.456425    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:38.466758    8434 logs.go:276] 0 containers: []
	W0719 07:40:38.466769    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:38.466821    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:38.477533    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:38.477549    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:38.477554    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:38.492988    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:38.493005    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:38.510468    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:38.510482    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:38.523490    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:38.523499    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:38.528055    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:38.528063    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:38.563019    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:38.563032    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:38.577525    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:38.577537    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:38.589558    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:38.589572    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:38.614282    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:38.614289    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:38.628069    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:38.628079    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:38.643549    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:38.643558    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:38.659029    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:38.659039    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:38.670982    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:38.670992    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:38.704209    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:38.704219    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:38.718914    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:38.718927    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:41.232679    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:46.235207    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:46.235713    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:46.273545    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:46.273684    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:46.295093    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:46.295181    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:46.312382    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:46.312458    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:46.324828    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:46.324896    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:46.337855    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:46.337922    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:46.350363    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:46.350437    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:46.361445    8434 logs.go:276] 0 containers: []
	W0719 07:40:46.361455    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:46.361511    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:46.372654    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:46.372673    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:46.372678    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:46.408044    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:46.408054    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:46.421301    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:46.421311    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:46.433718    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:46.433728    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:46.448682    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:46.448692    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:46.482677    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:46.482688    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:46.497261    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:46.497274    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:46.514669    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:46.514683    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:46.529325    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:46.529335    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:46.557394    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:46.557405    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:46.568941    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:46.568956    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:46.573279    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:46.573287    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:46.586660    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:46.586674    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:46.598751    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:46.598760    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:46.616787    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:46.616801    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:49.130806    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:54.132918    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:54.133110    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:54.151097    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:54.151189    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:54.164402    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:54.164464    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:54.176201    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:54.176272    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:54.186363    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:54.186434    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:54.196957    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:54.197018    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:54.207719    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:54.207787    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:54.217772    8434 logs.go:276] 0 containers: []
	W0719 07:40:54.217781    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:54.217830    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:54.228057    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:54.228082    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:54.228087    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:54.246250    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:54.246261    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:54.258014    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:54.258024    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:54.282541    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:54.282549    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:54.286841    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:54.286847    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:54.323533    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:54.323546    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:54.338732    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:54.338744    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:54.353901    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:54.353911    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:54.365502    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:54.365512    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:54.398371    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:54.398382    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:54.409589    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:54.409599    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:54.421182    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:54.421191    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:54.432394    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:54.432408    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:54.444548    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:54.444564    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:54.455904    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:54.455915    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:56.975700    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:01.977962    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:01.978209    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:41:01.991774    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:41:01.991856    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:41:02.004776    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:41:02.004846    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:41:02.022234    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:41:02.022309    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:41:02.033653    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:41:02.033714    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:41:02.056377    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:41:02.056446    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:41:02.075545    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:41:02.075626    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:41:02.091214    8434 logs.go:276] 0 containers: []
	W0719 07:41:02.091226    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:41:02.091287    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:41:02.105511    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:41:02.105530    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:41:02.105535    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:41:02.141443    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:41:02.141457    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:41:02.153453    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:41:02.153467    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:41:02.168697    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:41:02.168707    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:41:02.180618    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:41:02.180627    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:41:02.215481    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:41:02.215488    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:41:02.219961    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:41:02.219969    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:41:02.231124    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:41:02.231139    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:41:02.246817    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:41:02.246827    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:41:02.265164    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:41:02.265174    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:41:02.276714    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:41:02.276724    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:41:02.291070    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:41:02.291079    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:41:02.302735    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:41:02.302746    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:41:02.314380    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:41:02.314391    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:41:02.332160    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:41:02.332169    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:41:04.857668    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:09.859222    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:09.859468    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:41:09.884917    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:41:09.885033    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:41:09.901651    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:41:09.901738    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:41:09.915237    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:41:09.915303    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:41:09.929628    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:41:09.929699    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:41:09.941343    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:41:09.941410    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:41:09.951951    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:41:09.952019    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:41:09.961628    8434 logs.go:276] 0 containers: []
	W0719 07:41:09.961638    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:41:09.961695    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:41:09.972291    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:41:09.972307    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:41:09.972312    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:41:09.976750    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:41:09.976759    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:41:10.010903    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:41:10.010918    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:41:10.025262    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:41:10.025277    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:41:10.042847    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:41:10.042860    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:41:10.057997    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:41:10.058010    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:41:10.082979    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:41:10.082986    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:41:10.116556    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:41:10.116566    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:41:10.130897    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:41:10.130907    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:41:10.143094    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:41:10.143108    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:41:10.155274    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:41:10.155287    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:41:10.172547    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:41:10.172557    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:41:10.184132    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:41:10.184145    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:41:10.195296    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:41:10.195309    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:41:10.207126    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:41:10.207136    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:41:12.722565    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:17.724782    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:17.724925    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:41:17.736959    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:41:17.737043    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:41:17.749904    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:41:17.749970    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:41:17.762100    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:41:17.762178    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:41:17.781900    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:41:17.781976    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:41:17.793669    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:41:17.793739    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:41:17.805531    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:41:17.805605    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:41:17.816873    8434 logs.go:276] 0 containers: []
	W0719 07:41:17.816886    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:41:17.816948    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:41:17.827885    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:41:17.827903    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:41:17.827912    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:41:17.832830    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:41:17.832842    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:41:17.845939    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:41:17.845951    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:41:17.862616    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:41:17.862627    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:41:17.876038    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:41:17.876052    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:41:17.888984    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:41:17.888996    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:41:17.901503    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:41:17.901514    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:41:17.937551    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:41:17.937568    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:41:17.954115    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:41:17.954127    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:41:17.968066    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:41:17.968080    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:41:17.984708    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:41:17.984722    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:41:18.025133    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:41:18.025146    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:41:18.039048    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:41:18.039061    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:41:18.060193    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:41:18.060207    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:41:18.087209    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:41:18.087235    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:41:20.606006    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:25.608367    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:25.613085    8434 out.go:177] 
	W0719 07:41:25.617913    8434 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0719 07:41:25.617931    8434 out.go:239] * 
	W0719 07:41:25.619236    8434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:41:25.633920    8434 out.go:177] 

** /stderr **
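The stderr trace above is one retry loop: api_server.go probes https://10.0.2.15:8443/healthz roughly every eight seconds, each probe dies on a ~5s client timeout ("context deadline exceeded" or "i/o timeout"), and after the 6m0s node wait the run aborts with GUEST_START. Below is a minimal Go sketch of that polling pattern for readers tracing the failure; the exact timeout values and the TLS handling are assumptions for illustration, not minikube's actual api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall deadline
// passes. Illustrative only: the 5s per-request timeout and 6m overall wait
// mirror the behaviour visible in the log, not minikube's real code.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, as seen in the log
		Transport: &http.Transport{
			// Assumption: the apiserver cert inside the guest is self-signed,
			// so verification is skipped for the health probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}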
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-059000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
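Each "Gathering logs for ..." stanza in the trace is the same two-step pattern: resolve container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then run `docker logs --tail 400 <id>` on every match, warning when (as with "kindnet") nothing matches. A hedged Go sketch of that pattern follows; the component list and the 400-line tail come from the log, but the helper itself is illustrative, not minikube's logs.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs mimics the two commands the trace shows per component.
func tailComponentLogs(component string) error {
	// Step 1: find matching container IDs by the k8s_<component> name filter.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", component)
		return nil
	}
	// Step 2: tail the last 400 log lines of each matching container.
	for _, id := range ids {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("==> %s [%s] <==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		if err := tailComponentLogs(c); err != nil {
			fmt.Println("error gathering", c, "logs:", err)
		}
	}
}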
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-19 07:41:25.741262 -0700 PDT m=+1315.031058084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-059000 -n running-upgrade-059000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-059000 -n running-upgrade-059000: exit status 2 (15.580414333s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-059000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-710000          | force-systemd-flag-710000 | jenkins | v1.33.1 | 19 Jul 24 07:31 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-580000              | force-systemd-env-580000  | jenkins | v1.33.1 | 19 Jul 24 07:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-580000           | force-systemd-env-580000  | jenkins | v1.33.1 | 19 Jul 24 07:31 PDT | 19 Jul 24 07:31 PDT |
	| start   | -p docker-flags-033000                | docker-flags-033000       | jenkins | v1.33.1 | 19 Jul 24 07:31 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-710000             | force-systemd-flag-710000 | jenkins | v1.33.1 | 19 Jul 24 07:31 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-710000          | force-systemd-flag-710000 | jenkins | v1.33.1 | 19 Jul 24 07:31 PDT | 19 Jul 24 07:31 PDT |
	| start   | -p cert-expiration-134000             | cert-expiration-134000    | jenkins | v1.33.1 | 19 Jul 24 07:31 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-033000 ssh               | docker-flags-033000       | jenkins | v1.33.1 | 19 Jul 24 07:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-033000 ssh               | docker-flags-033000       | jenkins | v1.33.1 | 19 Jul 24 07:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-033000                | docker-flags-033000       | jenkins | v1.33.1 | 19 Jul 24 07:32 PDT | 19 Jul 24 07:32 PDT |
	| start   | -p cert-options-480000                | cert-options-480000       | jenkins | v1.33.1 | 19 Jul 24 07:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-480000 ssh               | cert-options-480000       | jenkins | v1.33.1 | 19 Jul 24 07:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-480000 -- sudo        | cert-options-480000       | jenkins | v1.33.1 | 19 Jul 24 07:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-480000                | cert-options-480000       | jenkins | v1.33.1 | 19 Jul 24 07:32 PDT | 19 Jul 24 07:32 PDT |
	| start   | -p running-upgrade-059000             | minikube                  | jenkins | v1.26.0 | 19 Jul 24 07:32 PDT | 19 Jul 24 07:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-059000             | running-upgrade-059000    | jenkins | v1.33.1 | 19 Jul 24 07:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-134000             | cert-expiration-134000    | jenkins | v1.33.1 | 19 Jul 24 07:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-134000             | cert-expiration-134000    | jenkins | v1.33.1 | 19 Jul 24 07:35 PDT | 19 Jul 24 07:35 PDT |
	| start   | -p kubernetes-upgrade-997000          | kubernetes-upgrade-997000 | jenkins | v1.33.1 | 19 Jul 24 07:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-997000          | kubernetes-upgrade-997000 | jenkins | v1.33.1 | 19 Jul 24 07:35 PDT | 19 Jul 24 07:35 PDT |
	| start   | -p kubernetes-upgrade-997000          | kubernetes-upgrade-997000 | jenkins | v1.33.1 | 19 Jul 24 07:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-997000          | kubernetes-upgrade-997000 | jenkins | v1.33.1 | 19 Jul 24 07:35 PDT | 19 Jul 24 07:35 PDT |
	| start   | -p stopped-upgrade-109000             | minikube                  | jenkins | v1.26.0 | 19 Jul 24 07:35 PDT | 19 Jul 24 07:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-109000 stop           | minikube                  | jenkins | v1.26.0 | 19 Jul 24 07:36 PDT | 19 Jul 24 07:36 PDT |
	| start   | -p stopped-upgrade-109000             | stopped-upgrade-109000    | jenkins | v1.33.1 | 19 Jul 24 07:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:36:23
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:36:23.303515    8572 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:36:23.303679    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:36:23.303683    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:36:23.303686    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:36:23.303842    8572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:36:23.305130    8572 out.go:298] Setting JSON to false
	I0719 07:36:23.324734    8572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5752,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:36:23.324809    8572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:36:23.330111    8572 out.go:177] * [stopped-upgrade-109000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:36:23.337058    8572 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:36:23.337124    8572 notify.go:220] Checking for updates...
	I0719 07:36:23.344177    8572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:36:23.347099    8572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:36:23.351094    8572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:36:23.354077    8572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:36:23.357063    8572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:36:23.360270    8572 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:36:23.364031    8572 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 07:36:23.366964    8572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:36:23.371009    8572 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:36:23.381081    8572 start.go:297] selected driver: qemu2
	I0719 07:36:23.381089    8572 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:36:23.381155    8572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:36:23.384087    8572 cni.go:84] Creating CNI manager for ""
	I0719 07:36:23.384107    8572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:36:23.384133    8572 start.go:340] cluster config:
	{Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:36:23.384192    8572 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:36:23.392108    8572 out.go:177] * Starting "stopped-upgrade-109000" primary control-plane node in "stopped-upgrade-109000" cluster
	I0719 07:36:23.396037    8572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 07:36:23.396054    8572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0719 07:36:23.396067    8572 cache.go:56] Caching tarball of preloaded images
	I0719 07:36:23.396143    8572 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:36:23.396149    8572 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0719 07:36:23.396213    8572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/config.json ...
	I0719 07:36:23.396712    8572 start.go:360] acquireMachinesLock for stopped-upgrade-109000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:36:23.396752    8572 start.go:364] duration metric: took 33.125µs to acquireMachinesLock for "stopped-upgrade-109000"
	I0719 07:36:23.396761    8572 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:36:23.396767    8572 fix.go:54] fixHost starting: 
	I0719 07:36:23.396888    8572 fix.go:112] recreateIfNeeded on stopped-upgrade-109000: state=Stopped err=<nil>
	W0719 07:36:23.396897    8572 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:36:23.405043    8572 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-109000" ...
	I0719 07:36:22.006174    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:22.006420    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:22.031939    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:22.032063    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:22.048197    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:22.048277    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:22.061844    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:22.061916    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:22.073681    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:22.073747    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:22.084130    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:22.084197    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:22.095013    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:22.095081    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:22.106361    8434 logs.go:276] 0 containers: []
	W0719 07:36:22.106373    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:22.106436    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:22.117350    8434 logs.go:276] 0 containers: []
	W0719 07:36:22.117365    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:22.117373    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:22.117379    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:22.154813    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:22.154821    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:22.159106    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:22.159113    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:22.170671    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:22.170683    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:22.187650    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:22.187661    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:22.205365    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:22.205378    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:22.218075    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:22.218086    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:22.256400    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:22.256412    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:22.270963    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:22.270977    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:22.285690    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:22.285701    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:22.296912    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:22.296922    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:22.316376    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:22.316389    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:22.340244    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:22.340251    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:22.353784    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:22.353794    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:22.365717    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:22.365728    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:23.409009    8572 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:36:23.409079    8572 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51371-:22,hostfwd=tcp::51372-:2376,hostname=stopped-upgrade-109000 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/disk.qcow2
	I0719 07:36:23.459830    8572 main.go:141] libmachine: STDOUT: 
	I0719 07:36:23.459860    8572 main.go:141] libmachine: STDERR: 
	I0719 07:36:23.459865    8572 main.go:141] libmachine: Waiting for VM to start (ssh -p 51371 docker@127.0.0.1)...
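
For reference, the restart above boils down to the qemu-system-aarch64 invocation logged at 07:36:23, trimmed here to its essentials (a sketch; the <...> placeholders stand for the host-specific firmware, ISO, and disk paths from this run). -accel hvf with -cpu host enables Hypervisor.framework acceleration on Apple Silicon, and the user-mode NIC forwards host port 51371 to guest SSH (22) and 51372 to the Docker TLS API (2376):

    qemu-system-aarch64 \
      -M virt,highmem=off -cpu host -accel hvf \
      -m 2200 -smp 2 \
      -drive file=<edk2-aarch64-code.fd>,readonly=on,format=raw,if=pflash \
      -boot d -cdrom <boot2docker.iso> \
      -nic user,model=virtio,hostfwd=tcp::51371-:22,hostfwd=tcp::51372-:2376 \
      -daemonize <disk.qcow2>
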
	I0719 07:36:24.882253    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:29.884685    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:29.884881    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:29.896935    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:29.896999    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:29.908512    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:29.908586    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:29.919475    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:29.919538    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:29.930708    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:29.930775    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:29.941371    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:29.941436    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:29.951795    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:29.951860    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:29.962669    8434 logs.go:276] 0 containers: []
	W0719 07:36:29.962678    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:29.962732    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:29.981413    8434 logs.go:276] 0 containers: []
	W0719 07:36:29.981424    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:29.981432    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:29.981438    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:29.999458    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:29.999469    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:30.012390    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:30.012400    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:30.024465    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:30.024475    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:30.039404    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:30.039414    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:30.051055    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:30.051066    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:30.055580    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:30.055591    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:30.090985    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:30.090994    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:30.109341    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:30.109352    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:30.123299    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:30.123312    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:30.137865    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:30.137875    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:30.155707    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:30.155717    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:30.180006    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:30.180012    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:30.191929    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:30.191940    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:30.228135    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:30.228145    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:32.741728    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:37.743916    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:37.744038    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:37.762311    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:37.762396    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:37.785805    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:37.785877    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:37.805652    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:37.805727    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:37.816904    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:37.816976    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:37.832002    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:37.832065    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:37.845835    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:37.845906    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:37.857394    8434 logs.go:276] 0 containers: []
	W0719 07:36:37.857406    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:37.857460    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:37.867447    8434 logs.go:276] 0 containers: []
	W0719 07:36:37.867458    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:37.867466    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:37.867471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:37.883811    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:37.883821    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:37.908622    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:37.908633    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:37.943374    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:37.943386    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:37.963932    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:37.963942    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:37.975522    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:37.975536    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:37.991574    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:37.991587    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:38.006419    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:38.006429    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:38.018576    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:38.018586    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:38.056356    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:38.056363    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:38.060704    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:38.060710    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:38.072097    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:38.072113    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:38.090654    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:38.090665    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:38.109921    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:38.109930    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:38.123722    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:38.123734    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:40.639620    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:43.343788    8572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/config.json ...
	I0719 07:36:43.344492    8572 machine.go:94] provisionDockerMachine start ...
	I0719 07:36:43.344679    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.345180    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.345194    8572 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 07:36:43.440924    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 07:36:43.440968    8572 buildroot.go:166] provisioning hostname "stopped-upgrade-109000"
	I0719 07:36:43.441080    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.441381    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.441394    8572 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-109000 && echo "stopped-upgrade-109000" | sudo tee /etc/hostname
	I0719 07:36:43.530844    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-109000
	
	I0719 07:36:43.530948    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.531170    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.531186    8572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-109000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-109000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-109000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 07:36:43.606133    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
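
The hostname provisioning that just completed is deliberately idempotent: set the kernel hostname, persist it to /etc/hostname, and only then patch /etc/hosts if it does not already agree. A minimal bash equivalent of the script the runner executed (NAME is a stand-in for the profile name):

    NAME=stopped-upgrade-109000
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        # rewrite the existing 127.0.1.1 entry in place
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi
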
	I0719 07:36:43.606146    8572 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-5980/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-5980/.minikube}
	I0719 07:36:43.606154    8572 buildroot.go:174] setting up certificates
	I0719 07:36:43.606158    8572 provision.go:84] configureAuth start
	I0719 07:36:43.606162    8572 provision.go:143] copyHostCerts
	I0719 07:36:43.606244    8572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem, removing ...
	I0719 07:36:43.606253    8572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem
	I0719 07:36:43.606489    8572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem (1078 bytes)
	I0719 07:36:43.606667    8572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem, removing ...
	I0719 07:36:43.606671    8572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem
	I0719 07:36:43.606724    8572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem (1123 bytes)
	I0719 07:36:43.606821    8572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem, removing ...
	I0719 07:36:43.606826    8572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem
	I0719 07:36:43.606872    8572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem (1679 bytes)
	I0719 07:36:43.606953    8572 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-109000 san=[127.0.0.1 localhost minikube stopped-upgrade-109000]
	I0719 07:36:43.768686    8572 provision.go:177] copyRemoteCerts
	I0719 07:36:43.768728    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 07:36:43.768737    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:36:43.806159    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 07:36:43.813135    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 07:36:43.819927    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 07:36:43.826396    8572 provision.go:87] duration metric: took 220.330334ms to configureAuth
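
configureAuth regenerates the per-profile server certificate (SANs 127.0.0.1, localhost, minikube, stopped-upgrade-109000) and copies ca.pem, server.pem, and server-key.pem into /etc/docker, which is what lets dockerd serve its API with client verification on the forwarded port. Once docker is restarted below, that API can be exercised from the host with stock docker TLS flags; a sketch, with $MINIKUBE_HOME standing in for the .minikube directory used in this run:

    docker --tlsverify \
      --tlscacert "$MINIKUBE_HOME/certs/ca.pem" \
      --tlscert "$MINIKUBE_HOME/certs/cert.pem" \
      --tlskey "$MINIKUBE_HOME/certs/key.pem" \
      -H tcp://127.0.0.1:51372 version
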
	I0719 07:36:43.826409    8572 buildroot.go:189] setting minikube options for container-runtime
	I0719 07:36:43.826525    8572 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:36:43.826565    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.826652    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.826659    8572 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 07:36:43.896346    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 07:36:43.896355    8572 buildroot.go:70] root file system type: tmpfs
	I0719 07:36:43.896406    8572 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 07:36:43.896462    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.896577    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.896613    8572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 07:36:43.972903    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 07:36:43.972964    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.973135    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.973144    8572 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 07:36:44.324481    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 07:36:44.324496    8572 machine.go:97] duration metric: took 980.451541ms to provisionDockerMachine
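
Two details in the unit written above are worth noting. The empty ExecStart= line clears any ExecStart inherited from a base unit, since systemd allows multiple ExecStart settings only for Type=oneshot services; and the install is a replace-only-if-changed idiom, where the diff failing (here because no docker.service existed yet, hence the "can't stat" message and the fresh symlink) is what triggers the move, reload, enable, and restart:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
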
	I0719 07:36:44.324503    8572 start.go:293] postStartSetup for "stopped-upgrade-109000" (driver="qemu2")
	I0719 07:36:44.324510    8572 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 07:36:44.324580    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 07:36:44.324593    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:36:44.361462    8572 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 07:36:44.362935    8572 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 07:36:44.362942    8572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-5980/.minikube/addons for local assets ...
	I0719 07:36:44.363043    8572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-5980/.minikube/files for local assets ...
	I0719 07:36:44.363165    8572 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem -> 64732.pem in /etc/ssl/certs
	I0719 07:36:44.363294    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 07:36:44.366303    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem --> /etc/ssl/certs/64732.pem (1708 bytes)
	I0719 07:36:44.375037    8572 start.go:296] duration metric: took 50.549666ms for postStartSetup
	I0719 07:36:44.375058    8572 fix.go:56] duration metric: took 20.980955084s for fixHost
	I0719 07:36:44.375109    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:44.375242    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:44.375247    8572 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 07:36:44.444355    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721399804.194076629
	
	I0719 07:36:44.444365    8572 fix.go:216] guest clock: 1721399804.194076629
	I0719 07:36:44.444369    8572 fix.go:229] Guest: 2024-07-19 07:36:44.194076629 -0700 PDT Remote: 2024-07-19 07:36:44.375059 -0700 PDT m=+21.106254293 (delta=-180.982371ms)
	I0719 07:36:44.444382    8572 fix.go:200] guest clock delta is within tolerance: -180.982371ms
	I0719 07:36:44.444385    8572 start.go:83] releasing machines lock for "stopped-upgrade-109000", held for 21.050324041s
	I0719 07:36:44.444456    8572 ssh_runner.go:195] Run: cat /version.json
	I0719 07:36:44.444466    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:36:44.444456    8572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 07:36:44.444502    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	W0719 07:36:44.445065    8572 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51371: connect: connection refused
	I0719 07:36:44.445086    8572 retry.go:31] will retry after 171.846708ms: dial tcp [::1]:51371: connect: connection refused
	W0719 07:36:44.657666    8572 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0719 07:36:44.657738    8572 ssh_runner.go:195] Run: systemctl --version
	I0719 07:36:44.659822    8572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 07:36:44.661685    8572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 07:36:44.661727    8572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 07:36:44.665307    8572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 07:36:44.670573    8572 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 07:36:44.670581    8572 start.go:495] detecting cgroup driver to use...
	I0719 07:36:44.670661    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 07:36:44.677385    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0719 07:36:44.680800    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 07:36:44.683624    8572 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 07:36:44.683652    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 07:36:44.686376    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 07:36:44.689593    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 07:36:44.692914    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 07:36:44.695976    8572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 07:36:44.698840    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 07:36:44.701776    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 07:36:44.705080    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 07:36:44.708262    8572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 07:36:44.710785    8572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 07:36:44.713715    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:44.775543    8572 ssh_runner.go:195] Run: sudo systemctl restart containerd
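
The containerd pass above matters only until runtime detection settles on docker (containerd is stopped again just below); its substance is forcing the runc v2 shim and the cgroupfs cgroup driver in /etc/containerd/config.toml before the restart:

    # switch the cgroup driver and the shim, then pick up the new config
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
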
	I0719 07:36:44.785441    8572 start.go:495] detecting cgroup driver to use...
	I0719 07:36:44.785507    8572 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 07:36:44.791791    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 07:36:44.796525    8572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 07:36:44.802303    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 07:36:44.807103    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 07:36:44.811845    8572 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 07:36:44.853383    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 07:36:44.858764    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 07:36:44.864127    8572 ssh_runner.go:195] Run: which cri-dockerd
	I0719 07:36:44.865346    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 07:36:44.868445    8572 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 07:36:44.873303    8572 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 07:36:44.939465    8572 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 07:36:44.998726    8572 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 07:36:44.998805    8572 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 07:36:45.003816    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:45.064327    8572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 07:36:46.189477    8572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.125602625s)
	I0719 07:36:46.189536    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 07:36:46.194537    8572 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 07:36:46.200679    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 07:36:46.205625    8572 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 07:36:46.269928    8572 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 07:36:46.329120    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:46.394640    8572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 07:36:46.400907    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 07:36:46.405931    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:46.477191    8572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
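
cri-dockerd is brought up through systemd socket activation; crictl was already pointed at unix:///var/run/cri-dockerd.sock via /etc/crictl.yaml above, and the unmask/enable/restart sequence just executed condenses to:

    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket
    sudo systemctl restart cri-docker.service   # /var/run/cri-dockerd.sock appears once this succeeds
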
	I0719 07:36:46.514570    8572 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 07:36:46.514640    8572 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 07:36:46.516675    8572 start.go:563] Will wait 60s for crictl version
	I0719 07:36:46.516719    8572 ssh_runner.go:195] Run: which crictl
	I0719 07:36:46.518642    8572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 07:36:46.532614    8572 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0719 07:36:46.532685    8572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 07:36:46.549793    8572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 07:36:46.570955    8572 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0719 07:36:46.571078    8572 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0719 07:36:46.572296    8572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
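
In qemu user-mode networking, 10.0.2.2 is the gateway address that reaches the host, so the step above gives the macOS host a stable, DNS-free name inside the guest. Simplified (the runner's version also drops any stale entry first):

    grep -q 'host.minikube.internal' /etc/hosts \
      || echo '10.0.2.2	host.minikube.internal' | sudo tee -a /etc/hosts
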
	I0719 07:36:46.576263    8572 kubeadm.go:883] updating cluster {Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0719 07:36:46.576309    8572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 07:36:46.576351    8572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 07:36:46.594619    8572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 07:36:46.594628    8572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 07:36:46.594670    8572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 07:36:46.597703    8572 ssh_runner.go:195] Run: which lz4
	I0719 07:36:46.598954    8572 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 07:36:46.600132    8572 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 07:36:46.600145    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0719 07:36:47.527478    8572 docker.go:649] duration metric: took 928.905791ms to copy over tarball
	I0719 07:36:47.527543    8572 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
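
The preload is an lz4-compressed tarball of the Docker image store (about 360 MB in this run); instead of pulling images over the network, the runner copies it into the VM over scp and unpacks it directly onto /var, then deletes it:

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4   # removed by the runner once extraction succeeds
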
	I0719 07:36:45.637811    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:45.638035    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:45.661001    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:45.661115    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:45.677163    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:45.677248    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:45.689697    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:45.689770    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:45.702464    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:45.702535    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:45.713452    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:45.713521    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:45.724207    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:45.724273    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:45.734686    8434 logs.go:276] 0 containers: []
	W0719 07:36:45.734697    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:45.734753    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:45.744468    8434 logs.go:276] 0 containers: []
	W0719 07:36:45.744483    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:45.744490    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:45.744496    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:45.781205    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:45.781214    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:45.795302    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:45.795311    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:45.812912    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:45.812922    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:45.836868    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:45.836879    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:45.841166    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:45.841174    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:45.852749    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:45.852764    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:45.867036    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:45.867048    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:45.878020    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:45.878031    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:45.890130    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:45.890142    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:45.901635    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:45.901646    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:45.920056    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:45.920069    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:45.935127    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:45.935136    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:45.970264    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:45.970273    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:45.986514    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:45.986523    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:48.701711    8572 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17457175s)
	I0719 07:36:48.701725    8572 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 07:36:48.717014    8572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 07:36:48.720240    8572 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0719 07:36:48.725314    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:48.791136    8572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 07:36:50.520994    8572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.730403417s)
	I0719 07:36:50.521108    8572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 07:36:50.545577    8572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 07:36:50.545587    8572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 07:36:50.545592    8572 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
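
The "wasn't preloaded" verdict despite the full image list above is expected here: this preload was built with the legacy k8s.gcr.io names, while this minikube checks for the registry.k8s.io names that the Kubernetes image registry later moved to, so the exact-name lookup misses and every image goes through the slower cached-image path instead. Per image, that path is an inspect by exact name and hash, removal of any stale tag, and a re-load from the host-side cache, as the lines that follow show. Condensed:

    # exact-name/hash check; a miss means the image "needs transfer"
    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.24.1
    # drop the stale tag, then re-load the image from the host's .minikube/cache/images
    docker rmi registry.k8s.io/kube-apiserver:v1.24.1
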
	I0719 07:36:50.551049    8572 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.553160    8572 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:50.554566    8572 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.554581    8572 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.555715    8572 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.555861    8572 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:50.557132    8572 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.558396    8572 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.558435    8572 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.558451    8572 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 07:36:50.559534    8572 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:50.559690    8572 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.561043    8572 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:50.561076    8572 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 07:36:50.562010    8572 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:50.563308    8572 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
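The daemon lookups above all fail by design: minikube first asks the local Docker daemon for each image and only falls back to its on-disk cache on a miss. A minimal sketch of that check-then-transfer loop follows; imageInDaemon mirrors the "docker image inspect" runs in this log, while loadFromCache is a hypothetical stand-in for the scp-and-load path shown further down.

package main

import (
	"fmt"
	"os/exec"
)

func imageInDaemon(tag string) bool {
	// Mirrors the log's: docker image inspect --format {{.Id}} <tag>
	return exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", tag).Run() == nil
}

func loadFromCache(tag string) error {
	// Placeholder for the scp + "docker load" step seen later in the log.
	fmt.Printf("would transfer and load %s from the local cache\n", tag)
	return nil
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.24.1",
		"registry.k8s.io/pause:3.7",
	}
	for _, img := range images {
		if imageInDaemon(img) {
			continue // already present in the runtime, nothing to do
		}
		if err := loadFromCache(img); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}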
	I0719 07:36:50.881341    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.892327    8572 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0719 07:36:50.892356    8572 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.892405    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.902739    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0719 07:36:50.924510    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.931171    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.939544    8572 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0719 07:36:50.939567    8572 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.939620    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.945668    8572 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0719 07:36:50.945688    8572 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.945742    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.950330    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0719 07:36:50.955620    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0719 07:36:50.972405    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.982283    8572 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0719 07:36:50.982303    8572 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.982351    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.992198    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0719 07:36:50.992410    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0719 07:36:51.002274    8572 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0719 07:36:51.002291    8572 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0719 07:36:51.002349    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0719 07:36:51.012823    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 07:36:51.012942    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0719 07:36:51.014416    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0719 07:36:51.014428    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0719 07:36:51.022136    8572 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 07:36:51.022145    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0719 07:36:51.037906    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:51.058409    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0719 07:36:51.058427    8572 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0719 07:36:51.058444    8572 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:51.058495    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0719 07:36:51.060550    8572 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 07:36:51.060662    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:51.069314    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 07:36:51.069453    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 07:36:51.076375    8572 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0719 07:36:51.076384    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0719 07:36:51.076395    8572 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:51.076408    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0719 07:36:51.076437    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:51.102665    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 07:36:51.102799    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0719 07:36:51.115095    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0719 07:36:51.115122    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0719 07:36:51.193262    8572 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 07:36:51.193355    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
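The two "arch mismatch: want arm64 got amd64. fixing" warnings come from minikube noticing that a cached manifest resolved to the wrong architecture and re-resolving it for arm64. A sketch of a platform-pinned pull with go-containerregistry, the library minikube's image handling is built on; this is an illustrative assumption about the mechanism, with error handling kept minimal.

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

func main() {
	ref, err := name.ParseReference("gcr.io/k8s-minikube/storage-provisioner:v5")
	if err != nil {
		panic(err)
	}
	// Ask the registry for the linux/arm64 entry of the manifest list.
	img, err := remote.Image(ref, remote.WithPlatform(v1.Platform{OS: "linux", Architecture: "arm64"}))
	if err != nil {
		panic(err)
	}
	cfg, err := img.ConfigFile()
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved architecture:", cfg.Architecture) // expect arm64
}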
	I0719 07:36:51.200977    8572 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 07:36:51.200990    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0719 07:36:51.250681    8572 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0719 07:36:51.250709    8572 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:51.250770    8572 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:51.296849    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 07:36:51.297965    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 07:36:51.298089    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 07:36:51.304265    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0719 07:36:51.304294    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0719 07:36:51.382943    8572 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 07:36:51.382966    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0719 07:36:51.687657    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 07:36:51.687681    8572 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 07:36:51.687697    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0719 07:36:51.835547    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 07:36:51.835588    8572 cache_images.go:92] duration metric: took 1.290365375s to LoadCachedImages
	W0719 07:36:51.835634    8572 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
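Each "Transferred and loaded ... from cache" line above is the tail end of the same two-step pipeline: copy the cached tarball to /var/lib/minikube/images on the node, then pipe it into "docker load". A minimal sketch of the load step, run locally instead of over SSH; the tarball path is taken from the log.

package main

import (
	"log"
	"os/exec"
)

func main() {
	tarball := "/var/lib/minikube/images/pause_3.7"
	// Equivalent of the log's: /bin/bash -c "sudo cat <tarball> | docker load"
	cmd := exec.Command("/bin/bash", "-c", "sudo cat "+tarball+" | docker load")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("docker load failed: %v\n%s", err, out)
	}
}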
	I0719 07:36:51.835641    8572 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0719 07:36:51.835695    8572 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-109000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 07:36:51.835755    8572 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 07:36:51.849093    8572 cni.go:84] Creating CNI manager for ""
	I0719 07:36:51.849105    8572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:36:51.849112    8572 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 07:36:51.849122    8572 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-109000 NodeName:stopped-upgrade-109000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 07:36:51.849193    8572 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-109000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
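The generated kubeadm config above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal sketch of reading a few fields back out of the kubelet document with sigs.k8s.io/yaml; the struct is a hand-rolled subset for illustration, not the upstream API type.

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

func main() {
	doc := []byte(`
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
failSwapOn: false
`)
	var cfg struct {
		CgroupDriver string `json:"cgroupDriver"`
		HairpinMode  string `json:"hairpinMode"`
		FailSwapOn   bool   `json:"failSwapOn"`
	}
	// sigs.k8s.io/yaml converts YAML to JSON first, hence the json tags.
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg) // {CgroupDriver:cgroupfs HairpinMode:hairpin-veth FailSwapOn:false}
}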
	I0719 07:36:51.849247    8572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0719 07:36:51.851990    8572 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 07:36:51.852018    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 07:36:51.855014    8572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0719 07:36:51.860132    8572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 07:36:51.864996    8572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0719 07:36:51.870280    8572 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0719 07:36:51.871629    8572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
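The one-liner above makes the /etc/hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the desired mapping, and copy the result back. A sketch of the same rule in Go, pointed at a scratch file rather than /etc/hosts itself.

package main

import (
	"os"
	"strings"
)

func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop the old mapping (mirrors: grep -v $'\t<name>$') and blank lines.
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("hosts.sample", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}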
	I0719 07:36:51.875288    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:51.935446    8572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 07:36:51.945644    8572 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000 for IP: 10.0.2.15
	I0719 07:36:51.945653    8572 certs.go:194] generating shared ca certs ...
	I0719 07:36:51.945665    8572 certs.go:226] acquiring lock for ca certs: {Name:mk9d0c6de3978c1656d7567742ecf2a49cbc189d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:51.945833    8572 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.key
	I0719 07:36:51.945886    8572 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.key
	I0719 07:36:51.945893    8572 certs.go:256] generating profile certs ...
	I0719 07:36:51.945965    8572 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.key
	I0719 07:36:51.945982    8572 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae
	I0719 07:36:51.945994    8572 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0719 07:36:52.018591    8572 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae ...
	I0719 07:36:52.018604    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae: {Name:mkaee78d5abd5d3da8d808e03ceb3cadfca2eaf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:52.019133    8572 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae ...
	I0719 07:36:52.019139    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae: {Name:mkd0cfab99ed4eb56f5637ac550bdd2dad781a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:52.019296    8572 certs.go:381] copying /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae -> /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt
	I0719 07:36:52.019436    8572 certs.go:385] copying /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae -> /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key
	I0719 07:36:52.019582    8572 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/proxy-client.key
	I0719 07:36:52.019716    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473.pem (1338 bytes)
	W0719 07:36:52.019750    8572 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473_empty.pem, impossibly tiny 0 bytes
	I0719 07:36:52.019758    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 07:36:52.019783    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem (1078 bytes)
	I0719 07:36:52.019805    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem (1123 bytes)
	I0719 07:36:52.019825    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem (1679 bytes)
	I0719 07:36:52.019863    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem (1708 bytes)
	I0719 07:36:52.020257    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 07:36:52.027183    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 07:36:52.034372    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 07:36:52.041539    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 07:36:52.048055    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 07:36:52.055011    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 07:36:52.062209    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 07:36:52.069178    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 07:36:52.075658    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem --> /usr/share/ca-certificates/64732.pem (1708 bytes)
	I0719 07:36:52.082792    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 07:36:52.089827    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473.pem --> /usr/share/ca-certificates/6473.pem (1338 bytes)
	I0719 07:36:52.096353    8572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 07:36:52.101180    8572 ssh_runner.go:195] Run: openssl version
	I0719 07:36:52.102932    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6473.pem && ln -fs /usr/share/ca-certificates/6473.pem /etc/ssl/certs/6473.pem"
	I0719 07:36:52.106120    8572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6473.pem
	I0719 07:36:52.107548    8572 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:20 /usr/share/ca-certificates/6473.pem
	I0719 07:36:52.107564    8572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6473.pem
	I0719 07:36:52.109391    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6473.pem /etc/ssl/certs/51391683.0"
	I0719 07:36:52.112182    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64732.pem && ln -fs /usr/share/ca-certificates/64732.pem /etc/ssl/certs/64732.pem"
	I0719 07:36:52.115334    8572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64732.pem
	I0719 07:36:52.116613    8572 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:20 /usr/share/ca-certificates/64732.pem
	I0719 07:36:52.116634    8572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64732.pem
	I0719 07:36:52.118233    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64732.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 07:36:52.121369    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 07:36:52.124420    8572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:36:52.125765    8572 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:32 /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:36:52.125784    8572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:36:52.127687    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
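Each hash-and-symlink cycle above works the same way: "openssl x509 -hash -noout" prints the certificate's subject hash (b5213941 for minikubeCA.pem here), and the cert is then linked as <hash>.0 in the OpenSSL trust directory so TLS clients can find it. A sketch of one cycle; it shells out to openssl just as the log does, and writing into /etc/ssl/certs requires root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: drop any stale link, then create the new one.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}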
	I0719 07:36:52.130993    8572 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 07:36:52.132587    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 07:36:52.134815    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 07:36:52.136861    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 07:36:52.138716    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 07:36:52.140434    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 07:36:52.142048    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
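The six "openssl x509 -noout -checkend 86400" runs above each ask one question: does this certificate expire within the next 24 hours (86400 seconds)? A native equivalent with crypto/x509; the certificate path is taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same test as -checkend: true if NotAfter falls inside the window.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}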
	I0719 07:36:52.143732    8572 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:36:52.143795    8572 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 07:36:52.154040    8572 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 07:36:52.157255    8572 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 07:36:52.157261    8572 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 07:36:52.157283    8572 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 07:36:52.159967    8572 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:36:52.160262    8572 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-109000" does not appear in /Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:36:52.160358    8572 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-5980/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-109000" cluster setting kubeconfig missing "stopped-upgrade-109000" context setting]
	I0719 07:36:52.160574    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/kubeconfig: {Name:mk0c17b3830610cdae4c834f6bae9631cabc7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:52.161071    8572 kapi.go:59] client config for stopped-upgrade-109000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fd7790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 07:36:52.161423    8572 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 07:36:52.164041    8572 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-109000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
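The drift check above leans on diff's exit status: "diff -u old new" exits 0 when the files match, 1 when they differ, and greater than 1 on error, so status 1 is the "reconfigure needed" signal and the captured unified diff is what gets logged. A sketch of that branching; paths are taken from the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u", "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("config unchanged")
	case errors.As(err, &ee) && ee.ExitCode() == 1:
		// Exit status 1 means the files differ; out holds the unified diff.
		fmt.Printf("config drift detected:\n%s", out)
	default:
		fmt.Println("diff failed:", err)
	}
}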
	I0719 07:36:52.164047    8572 kubeadm.go:1160] stopping kube-system containers ...
	I0719 07:36:52.164086    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 07:36:52.174675    8572 docker.go:483] Stopping containers: [42ae714b96fa 9130f74d6072 f86743dde90f a6a24cbd561c 04c61becd2f7 c6e1d72d884c dcec56b7a639 b9e533e1c490]
	I0719 07:36:52.174734    8572 ssh_runner.go:195] Run: docker stop 42ae714b96fa 9130f74d6072 f86743dde90f a6a24cbd561c 04c61becd2f7 c6e1d72d884c dcec56b7a639 b9e533e1c490
	I0719 07:36:52.185098    8572 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 07:36:52.190597    8572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 07:36:52.193754    8572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 07:36:52.193760    8572 kubeadm.go:157] found existing configuration files:
	
	I0719 07:36:52.193790    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf
	I0719 07:36:52.196301    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 07:36:52.196328    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 07:36:52.199089    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf
	I0719 07:36:52.201978    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 07:36:52.202000    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 07:36:52.204596    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf
	I0719 07:36:52.207258    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 07:36:52.207276    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 07:36:52.210074    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf
	I0719 07:36:52.212621    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 07:36:52.212638    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
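The four grep/rm cycles above implement a single rule: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, delete it so kubeadm can regenerate it (here every file is simply absent, so all four are "removed"). A compact sketch of that rule; the endpoint and file list are taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51405"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: remove it,
			// ignoring errors just like "sudo rm -f".
			_ = os.Remove(path)
			fmt.Println("removed stale", path)
		}
	}
}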
	I0719 07:36:52.215236    8572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 07:36:52.218165    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.240619    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.655105    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.768454    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.789395    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.812684    8572 api_server.go:52] waiting for apiserver process to appear ...
	I0719 07:36:52.812758    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:36:48.500422    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:53.314723    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:36:53.814644    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:36:53.819201    8572 api_server.go:72] duration metric: took 1.006777125s to wait for apiserver process to appear ...
	I0719 07:36:53.819210    8572 api_server.go:88] waiting for apiserver healthz status ...
	I0719 07:36:53.819220    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:53.501185    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:53.501298    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:36:53.516693    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:36:53.516766    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:36:53.529754    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:36:53.529826    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:36:53.542223    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:36:53.542296    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:36:53.554514    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:36:53.554585    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:36:53.566528    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:36:53.566600    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:36:53.579423    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:36:53.579492    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:36:53.594726    8434 logs.go:276] 0 containers: []
	W0719 07:36:53.594740    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:36:53.594803    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:36:53.607895    8434 logs.go:276] 0 containers: []
	W0719 07:36:53.607909    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:36:53.607917    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:36:53.607923    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:36:53.647507    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:36:53.647522    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:36:53.660210    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:36:53.660224    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:36:53.677476    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:36:53.677493    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:36:53.694541    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:36:53.694554    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:36:53.715496    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:36:53.715516    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:36:53.733334    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:36:53.733355    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:36:53.746768    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:36:53.746780    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:36:53.765327    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:36:53.765341    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:36:53.789562    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:36:53.789576    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:36:53.801827    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:36:53.801838    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:36:53.839991    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:36:53.840009    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:36:53.845118    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:36:53.845127    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:36:53.859657    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:36:53.859669    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:36:53.871861    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:36:53.871873    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:36:56.387363    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:58.820250    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:58.820300    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:01.388749    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
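Both processes (8434 and 8572) are now in the same loop: poll https://10.0.2.15:8443/healthz with a short client timeout, log the "stopped: ... Client.Timeout exceeded" failure, gather diagnostics, and retry. A minimal polling sketch; the 5-second timeout is inferred from the timestamps above, and InsecureSkipVerify is an illustration-only shortcut.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // gives the "Client.Timeout exceeded" errors seen above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}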
	I0719 07:37:01.388914    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:37:01.406120    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:37:01.406210    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:37:01.419651    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:37:01.419726    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:37:01.431106    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:37:01.431172    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:37:01.441836    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:37:01.441908    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:37:01.452155    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:37:01.452220    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:37:01.462704    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:37:01.462759    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:37:01.473325    8434 logs.go:276] 0 containers: []
	W0719 07:37:01.473339    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:37:01.473396    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:37:01.484927    8434 logs.go:276] 0 containers: []
	W0719 07:37:01.484937    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:37:01.484944    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:37:01.484950    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:37:01.523994    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:37:01.524003    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:37:01.537934    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:37:01.537947    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:37:01.549458    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:37:01.549470    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:37:01.563071    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:37:01.563081    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:37:01.582711    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:37:01.582725    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:37:01.600050    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:37:01.600063    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:37:01.618120    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:37:01.618133    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:37:01.632450    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:37:01.632462    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:37:01.643511    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:37:01.643523    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:37:01.659125    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:37:01.659138    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:37:01.671041    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:37:01.671052    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:37:01.675633    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:37:01.675641    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:37:01.712245    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:37:01.712257    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:37:01.724310    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:37:01.724320    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:37:03.820256    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:03.820306    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:04.247310    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:08.820353    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:08.820398    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:09.248987    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:09.249155    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:37:09.261292    8434 logs.go:276] 2 containers: [8f87efd9c8ae 5e4a72cfc197]
	I0719 07:37:09.261376    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:37:09.272761    8434 logs.go:276] 2 containers: [602f667d648b 3fe4e035dfe4]
	I0719 07:37:09.272834    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:37:09.285634    8434 logs.go:276] 1 containers: [eb68908b9d97]
	I0719 07:37:09.285708    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:37:09.300496    8434 logs.go:276] 2 containers: [612aac33241a b2c79da72382]
	I0719 07:37:09.300570    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:37:09.312342    8434 logs.go:276] 1 containers: [76e9390acb3f]
	I0719 07:37:09.312407    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:37:09.326555    8434 logs.go:276] 2 containers: [838d326ab661 6510bef59285]
	I0719 07:37:09.326626    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:37:09.336968    8434 logs.go:276] 0 containers: []
	W0719 07:37:09.336979    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:37:09.337035    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:37:09.348064    8434 logs.go:276] 0 containers: []
	W0719 07:37:09.348075    8434 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:37:09.348083    8434 logs.go:123] Gathering logs for kube-apiserver [5e4a72cfc197] ...
	I0719 07:37:09.348088    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e4a72cfc197"
	I0719 07:37:09.359344    8434 logs.go:123] Gathering logs for kube-scheduler [612aac33241a] ...
	I0719 07:37:09.359355    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 612aac33241a"
	I0719 07:37:09.376438    8434 logs.go:123] Gathering logs for kube-scheduler [b2c79da72382] ...
	I0719 07:37:09.376454    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2c79da72382"
	I0719 07:37:09.389043    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:37:09.389054    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:37:09.393568    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:37:09.393575    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:37:09.429658    8434 logs.go:123] Gathering logs for kube-apiserver [8f87efd9c8ae] ...
	I0719 07:37:09.429675    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f87efd9c8ae"
	I0719 07:37:09.443300    8434 logs.go:123] Gathering logs for kube-proxy [76e9390acb3f] ...
	I0719 07:37:09.443315    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 76e9390acb3f"
	I0719 07:37:09.455226    8434 logs.go:123] Gathering logs for etcd [3fe4e035dfe4] ...
	I0719 07:37:09.455238    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fe4e035dfe4"
	I0719 07:37:09.470530    8434 logs.go:123] Gathering logs for coredns [eb68908b9d97] ...
	I0719 07:37:09.470543    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eb68908b9d97"
	I0719 07:37:09.485783    8434 logs.go:123] Gathering logs for kube-controller-manager [838d326ab661] ...
	I0719 07:37:09.485796    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 838d326ab661"
	I0719 07:37:09.504157    8434 logs.go:123] Gathering logs for kube-controller-manager [6510bef59285] ...
	I0719 07:37:09.504166    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6510bef59285"
	I0719 07:37:09.519764    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:37:09.519780    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:37:09.532471    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:37:09.532486    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:37:09.567509    8434 logs.go:123] Gathering logs for etcd [602f667d648b] ...
	I0719 07:37:09.567518    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 602f667d648b"
	I0719 07:37:09.581334    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:37:09.581345    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:37:12.107504    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:17.109400    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:17.109453    8434 kubeadm.go:597] duration metric: took 4m5.016966625s to restartPrimaryControlPlane
	W0719 07:37:17.109504    8434 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 07:37:17.109527    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
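At this point the restart path has timed out (4m5s above), so minikube abandons it: "kubeadm reset --force" wipes the control plane, and a fresh "kubeadm init" follows a few lines below. A sketch of that try-restart-else-reset shape; tryRestart is a hypothetical stand-in for the path that just failed.

package main

import (
	"fmt"
	"os/exec"
)

func tryRestart() error {
	// Stand-in for restartPrimaryControlPlane, which timed out above.
	return fmt.Errorf("control plane did not come up")
}

func main() {
	if err := tryRestart(); err != nil {
		fmt.Println("restart failed, resetting cluster:", err)
		reset := exec.Command("kubeadm", "reset", "--cri-socket", "/var/run/cri-dockerd.sock", "--force")
		if out, rerr := reset.CombinedOutput(); rerr != nil {
			fmt.Printf("reset failed: %v\n%s", rerr, out)
		}
		// A full "kubeadm init --config ..." would follow, as the log shows.
	}
}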
	I0719 07:37:18.045159    8434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 07:37:18.050064    8434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 07:37:18.052905    8434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 07:37:18.055536    8434 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 07:37:18.055543    8434 kubeadm.go:157] found existing configuration files:
	
	I0719 07:37:18.055566    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/admin.conf
	I0719 07:37:18.058657    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 07:37:18.058682    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 07:37:18.061888    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/kubelet.conf
	I0719 07:37:18.064390    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 07:37:18.064412    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 07:37:18.067023    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/controller-manager.conf
	I0719 07:37:18.070187    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 07:37:18.070212    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 07:37:18.072981    8434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/scheduler.conf
	I0719 07:37:18.075363    8434 kubeadm.go:163] "https://control-plane.minikube.internal:51189" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51189 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 07:37:18.075383    8434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
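Editor's note: the four grep/rm pairs above implement a single rule — a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so the upcoming kubeadm init can rewrite it. A hedged bash restatement of that check (endpoint copied from the log; the exit-status handling is the point — grep exits non-zero both when the file is missing and when the endpoint is absent):

    endpoint="https://control-plane.minikube.internal:51189"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Stale or absent configs are cleared either way; rm -f tolerates missing files.
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
            sudo rm -f "/etc/kubernetes/$conf"
        fi
    done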
	I0719 07:37:18.078477    8434 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 07:37:18.095386    8434 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 07:37:18.095446    8434 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 07:37:18.143796    8434 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 07:37:18.143854    8434 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 07:37:18.143909    8434 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 07:37:18.195393    8434 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 07:37:13.820977    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:13.821023    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:18.204627    8434 out.go:204]   - Generating certificates and keys ...
	I0719 07:37:18.204661    8434 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 07:37:18.204697    8434 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 07:37:18.204772    8434 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 07:37:18.204822    8434 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 07:37:18.204878    8434 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 07:37:18.204933    8434 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 07:37:18.204971    8434 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 07:37:18.205007    8434 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 07:37:18.205055    8434 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 07:37:18.205100    8434 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 07:37:18.205131    8434 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 07:37:18.205159    8434 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 07:37:18.555124    8434 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 07:37:18.608243    8434 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 07:37:18.662664    8434 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 07:37:18.691156    8434 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 07:37:18.721204    8434 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 07:37:18.721524    8434 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 07:37:18.721665    8434 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 07:37:18.795327    8434 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 07:37:18.821661    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:18.821687    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:18.799523    8434 out.go:204]   - Booting up control plane ...
	I0719 07:37:18.799567    8434 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 07:37:18.800252    8434 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 07:37:18.800626    8434 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 07:37:18.800872    8434 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 07:37:18.801633    8434 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 07:37:23.303221    8434 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501344 seconds
	I0719 07:37:23.303301    8434 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 07:37:23.307162    8434 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 07:37:23.817663    8434 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 07:37:23.817770    8434 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-059000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 07:37:24.323407    8434 kubeadm.go:310] [bootstrap-token] Using token: nv695p.iid5xnr7pfj6tlwc
	I0719 07:37:24.329675    8434 out.go:204]   - Configuring RBAC rules ...
	I0719 07:37:24.329769    8434 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 07:37:24.329852    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 07:37:24.333473    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 07:37:24.335183    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 07:37:24.336808    8434 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 07:37:24.338140    8434 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 07:37:24.341930    8434 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 07:37:24.494635    8434 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 07:37:24.728277    8434 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 07:37:24.728796    8434 kubeadm.go:310] 
	I0719 07:37:24.728825    8434 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 07:37:24.728830    8434 kubeadm.go:310] 
	I0719 07:37:24.728873    8434 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 07:37:24.728880    8434 kubeadm.go:310] 
	I0719 07:37:24.728898    8434 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 07:37:24.728937    8434 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 07:37:24.728976    8434 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 07:37:24.728980    8434 kubeadm.go:310] 
	I0719 07:37:24.729012    8434 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 07:37:24.729016    8434 kubeadm.go:310] 
	I0719 07:37:24.729040    8434 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 07:37:24.729043    8434 kubeadm.go:310] 
	I0719 07:37:24.729070    8434 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 07:37:24.729126    8434 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 07:37:24.729180    8434 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 07:37:24.729186    8434 kubeadm.go:310] 
	I0719 07:37:24.729238    8434 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 07:37:24.729284    8434 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 07:37:24.729291    8434 kubeadm.go:310] 
	I0719 07:37:24.729351    8434 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nv695p.iid5xnr7pfj6tlwc \
	I0719 07:37:24.729415    8434 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 \
	I0719 07:37:24.729429    8434 kubeadm.go:310] 	--control-plane 
	I0719 07:37:24.729432    8434 kubeadm.go:310] 
	I0719 07:37:24.729480    8434 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 07:37:24.729484    8434 kubeadm.go:310] 
	I0719 07:37:24.729529    8434 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nv695p.iid5xnr7pfj6tlwc \
	I0719 07:37:24.729585    8434 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 
	I0719 07:37:24.729681    8434 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 07:37:24.729691    8434 cni.go:84] Creating CNI manager for ""
	I0719 07:37:24.729700    8434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:37:24.734199    8434 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 07:37:24.744155    8434 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 07:37:24.747130    8434 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
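Editor's note: for context on the 496-byte file scp'd above, a bridge CNI conflist of roughly this shape is what minikube typically writes when it recommends the bridge CNI. The exact contents below are an assumption — an illustrative plugin list and subnet, not the byte-for-byte file from this run:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF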
	I0719 07:37:24.752013    8434 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 07:37:24.752079    8434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-059000 minikube.k8s.io/updated_at=2024_07_19T07_37_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=running-upgrade-059000 minikube.k8s.io/primary=true
	I0719 07:37:24.752123    8434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 07:37:24.793559    8434 ops.go:34] apiserver oom_adj: -16
	I0719 07:37:24.793572    8434 kubeadm.go:1113] duration metric: took 41.526209ms to wait for elevateKubeSystemPrivileges
	I0719 07:37:24.793582    8434 kubeadm.go:394] duration metric: took 4m12.715741083s to StartCluster
	I0719 07:37:24.793595    8434 settings.go:142] acquiring lock: {Name:mk67df71d562cbffe9f3bde88489898c395cdfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:37:24.793771    8434 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:37:24.794162    8434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/kubeconfig: {Name:mk0c17b3830610cdae4c834f6bae9631cabc7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:37:24.794368    8434 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:37:24.794374    8434 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 07:37:24.794432    8434 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-059000"
	I0719 07:37:24.794441    8434 config.go:182] Loaded profile config "running-upgrade-059000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:37:24.794454    8434 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-059000"
	W0719 07:37:24.794459    8434 addons.go:243] addon storage-provisioner should already be in state true
	I0719 07:37:24.794471    8434 host.go:66] Checking if "running-upgrade-059000" exists ...
	I0719 07:37:24.794496    8434 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-059000"
	I0719 07:37:24.794509    8434 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-059000"
	I0719 07:37:24.799635    8434 out.go:177] * Verifying Kubernetes components...
	I0719 07:37:24.801113    8434 kapi.go:59] client config for running-upgrade-059000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/running-upgrade-059000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106557790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0719 07:37:24.802328    8434 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-059000"
	W0719 07:37:24.802335    8434 addons.go:243] addon default-storageclass should already be in state true
	I0719 07:37:24.802344    8434 host.go:66] Checking if "running-upgrade-059000" exists ...
	I0719 07:37:24.802927    8434 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 07:37:24.802932    8434 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 07:37:24.802938    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
	I0719 07:37:24.806123    8434 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:37:23.822588    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:23.822604    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:24.806234    8434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:37:24.809156    8434 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:37:24.809161    8434 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 07:37:24.809167    8434 sshutil.go:53] new ssh client: &{IP:localhost Port:51157 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/running-upgrade-059000/id_rsa Username:docker}
	I0719 07:37:24.877133    8434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 07:37:24.882256    8434 api_server.go:52] waiting for apiserver process to appear ...
	I0719 07:37:24.882304    8434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:37:24.888361    8434 api_server.go:72] duration metric: took 93.983459ms to wait for apiserver process to appear ...
	I0719 07:37:24.888372    8434 api_server.go:88] waiting for apiserver healthz status ...
	I0719 07:37:24.888380    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:24.893456    8434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:37:24.940075    8434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 07:37:28.823938    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:28.824012    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:29.890346    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:29.890389    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:33.824524    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:33.824598    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:34.890563    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:34.890588    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:38.826984    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:38.827026    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:39.890809    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:39.890834    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:43.829219    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:43.829276    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:44.891173    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:44.891223    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:48.831559    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:48.831586    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:49.892068    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:49.892111    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:54.892862    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:54.892905    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
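Editor's note: the stopped/Checking pairs that dominate this stretch are one retry loop — probe /healthz with a short per-request budget (roughly five seconds, visible in the timestamp gaps), log the context-deadline failure, and immediately re-arm. A rough shell equivalent (curl flags are illustrative; api_server.go uses a Go HTTP client and the cluster CA rather than -k):

    url="https://10.0.2.15:8443/healthz"
    until curl -fsk --max-time 5 "$url" >/dev/null; do
        # --max-time paces the loop the way the Go client's timeout does.
        echo "stopped: $url: timed out or unhealthy; retrying"
    done
    echo "apiserver healthz OK"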
	W0719 07:37:55.218832    8434 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0719 07:37:55.224220    8434 out.go:177] * Enabled addons: storage-provisioner
	I0719 07:37:53.831850    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:53.832164    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:37:53.858200    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:37:53.858343    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:37:53.874885    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:37:53.874973    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:37:53.888429    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:37:53.888493    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:37:53.899923    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:37:53.899996    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:37:53.910760    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:37:53.910847    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:37:53.921279    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:37:53.921346    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:37:53.931676    8572 logs.go:276] 0 containers: []
	W0719 07:37:53.931687    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:37:53.931745    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:37:53.942134    8572 logs.go:276] 0 containers: []
	W0719 07:37:53.942146    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:37:53.942155    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:37:53.942161    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:37:53.946294    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:37:53.946300    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:37:53.957715    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:37:53.957730    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:37:53.975391    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:37:53.975402    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:37:54.083189    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:37:54.083200    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:37:54.097252    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:37:54.097262    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:37:54.111491    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:37:54.111502    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:37:54.130128    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:37:54.130143    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:37:54.146135    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:37:54.146146    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:37:54.170356    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:37:54.170365    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:37:54.181681    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:37:54.181693    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:37:54.193042    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:37:54.193052    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:37:54.232520    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:37:54.232532    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:37:54.274130    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:37:54.274140    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:37:54.285428    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:37:54.285441    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:37:56.805233    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:55.232115    8434 addons.go:510] duration metric: took 30.438460292s for enable addons: enabled=[storage-provisioner]
	I0719 07:38:01.806273    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:01.806490    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:01.824257    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:01.824358    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:01.838009    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:01.838075    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:01.849483    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:01.849556    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:01.860268    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:01.860331    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:01.870542    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:01.870616    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:01.883853    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:01.883922    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:01.894041    8572 logs.go:276] 0 containers: []
	W0719 07:38:01.894053    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:01.894105    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:01.904360    8572 logs.go:276] 0 containers: []
	W0719 07:38:01.904370    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:01.904378    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:01.904384    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:01.942451    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:01.942466    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:01.954331    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:01.954345    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:01.966218    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:01.966228    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:01.970467    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:01.970474    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:01.981978    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:01.981993    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:01.999445    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:01.999456    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:02.013306    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:02.013320    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:02.051211    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:02.051223    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:02.066747    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:02.066763    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:02.080341    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:02.080354    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:02.095036    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:02.095047    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:02.132189    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:02.132201    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:02.146031    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:02.146042    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:02.160656    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:02.160665    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:37:59.893617    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:59.893715    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:04.688271    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:04.895173    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:04.895212    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:09.690559    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:09.690898    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:09.719066    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:09.719180    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:09.737893    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:09.737977    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:09.751730    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:09.751811    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:09.763748    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:09.763817    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:09.774639    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:09.774706    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:09.785158    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:09.785223    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:09.795406    8572 logs.go:276] 0 containers: []
	W0719 07:38:09.795418    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:09.795477    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:09.805840    8572 logs.go:276] 0 containers: []
	W0719 07:38:09.805850    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:09.805858    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:09.805864    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:09.817935    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:09.817945    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:09.839841    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:09.839851    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:09.865617    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:09.865626    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:09.876832    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:09.876843    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:09.891138    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:09.891153    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:09.937577    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:09.937593    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:09.977283    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:09.977310    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:09.990372    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:09.990385    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:10.012403    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:10.012414    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:10.016565    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:10.016575    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:10.031294    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:10.031306    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:10.051215    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:10.051225    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:10.066210    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:10.066224    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:10.077672    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:10.077687    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:12.615586    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:09.896433    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:09.896449    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:17.617965    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:17.618387    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:17.656381    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:17.656514    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:17.679130    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:17.679223    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:17.695856    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:17.695925    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:17.707792    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:17.707859    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:17.719043    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:17.719105    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:17.729668    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:17.729736    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:17.740261    8572 logs.go:276] 0 containers: []
	W0719 07:38:17.740271    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:17.740325    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:17.750408    8572 logs.go:276] 0 containers: []
	W0719 07:38:17.750418    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:17.750427    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:17.750432    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:17.765394    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:17.765405    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:17.777498    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:17.777509    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:17.793071    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:17.793083    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:17.828620    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:17.828633    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:17.842907    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:17.842917    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:17.880431    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:17.880447    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:17.891911    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:17.891925    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:17.895916    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:17.895922    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:17.909476    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:17.909490    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:17.923597    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:17.923610    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:17.947000    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:17.947007    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:17.984399    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:17.984405    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:17.996297    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:17.996309    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:18.010184    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:18.010196    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:14.898328    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:14.898369    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:20.535248    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:19.900225    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:19.900265    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:25.537705    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:25.537939    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:25.554815    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:25.554893    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:25.568264    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:25.568334    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:25.580624    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:25.580679    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:25.590879    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:25.590949    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:25.601284    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:25.601343    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:25.611838    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:25.611901    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:25.622046    8572 logs.go:276] 0 containers: []
	W0719 07:38:25.622061    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:25.622116    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:25.632878    8572 logs.go:276] 0 containers: []
	W0719 07:38:25.632890    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:25.632898    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:25.632903    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:25.650934    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:25.650944    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:25.662319    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:25.662329    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:25.674178    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:25.674189    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:25.695632    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:25.695642    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:25.735003    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:25.735013    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:25.739217    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:25.739223    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:25.754385    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:25.754399    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:25.790993    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:25.791006    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:25.805295    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:25.805306    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:25.816879    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:25.816890    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:25.830561    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:25.830574    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:25.855554    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:25.855563    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:25.869218    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:25.869229    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:25.907290    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:25.907307    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:24.902485    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:24.902696    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:24.919987    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:24.920067    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:24.947561    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:24.947630    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:24.959488    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:24.959544    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:24.973377    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:24.973447    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:24.984191    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:24.984261    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:24.994511    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:24.994576    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:25.004694    8434 logs.go:276] 0 containers: []
	W0719 07:38:25.004704    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:25.004759    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:25.014957    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:25.014976    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:25.014983    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:25.029130    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:25.029141    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:25.063547    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:25.063555    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:25.068149    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:25.068156    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:25.104236    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:25.104246    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:25.119061    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:25.119076    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:25.131687    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:25.131698    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:25.156949    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:25.156959    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:25.172596    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:25.172607    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:25.186770    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:25.186780    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:25.198352    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:25.198362    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:25.209982    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:25.209995    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:25.228297    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:25.228307    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:27.742093    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:28.428193    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:32.744436    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:32.744619    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:32.755340    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:32.755424    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:32.765803    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:32.765871    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:32.776593    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:32.776659    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:32.786979    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:32.787048    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:32.798028    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:32.798095    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:32.808534    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:32.808599    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:32.818571    8434 logs.go:276] 0 containers: []
	W0719 07:38:32.818582    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:32.818638    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:32.832971    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:32.832987    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:32.832993    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:32.844668    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:32.844681    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:32.859317    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:32.859329    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:32.871334    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:32.871345    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:32.895504    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:32.895511    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:32.929960    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:32.929969    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:32.969408    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:32.969417    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:32.983575    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:32.983585    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:33.001307    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:33.001317    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:33.012799    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:33.012812    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:33.024005    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:33.024016    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:33.028452    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:33.028461    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:33.042903    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:33.042915    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:33.430459    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:33.430690    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:33.451065    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:33.451155    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:33.471733    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:33.471801    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:33.487626    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:33.487683    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:33.498153    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:33.498225    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:33.508404    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:33.508471    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:33.523202    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:33.523268    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:33.534172    8572 logs.go:276] 0 containers: []
	W0719 07:38:33.534183    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:33.534241    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:33.544800    8572 logs.go:276] 0 containers: []
	W0719 07:38:33.544810    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:33.544819    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:33.544824    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:33.556240    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:33.556253    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:33.567802    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:33.567813    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:33.582281    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:33.582293    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:33.595787    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:33.595800    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:33.610533    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:33.610544    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:33.621582    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:33.621592    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:33.658446    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:33.658454    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:33.698416    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:33.698427    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:33.733763    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:33.733774    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:33.749130    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:33.749140    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:33.767723    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:33.767733    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:33.772536    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:33.772542    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:33.784296    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:33.784308    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:33.801803    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:33.801813    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
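
[Annotation] The bracketed container IDs in each cycle come from running docker ps -a --filter=name=k8s_<component> --format={{.ID}} once per control-plane component; zero matches produce the "No container was found matching" warnings. Below is a hedged sketch of that enumeration using os/exec against a local Docker daemon; minikube itself issues the identical command over SSH inside the guest VM, so the local invocation is an assumption for illustration.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of containers whose names match the
    // k8s_<component> prefix, the same filter used in the log lines above.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One ID per line; Fields also drops the trailing newline.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
    	}
    }
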
	I0719 07:38:36.328371    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:35.556830    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:41.330610    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:41.330689    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:41.341357    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:41.341421    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:41.353278    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:41.353340    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:41.363900    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:41.363969    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:41.374417    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:41.374485    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:41.385235    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:41.385297    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:41.395356    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:41.395415    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:41.405565    8572 logs.go:276] 0 containers: []
	W0719 07:38:41.405580    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:41.405644    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:41.416093    8572 logs.go:276] 0 containers: []
	W0719 07:38:41.416105    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:41.416112    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:41.416118    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:41.453767    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:41.453778    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:41.467500    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:41.467513    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:41.479352    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:41.479363    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:41.498357    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:41.498366    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:41.512254    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:41.512263    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:41.523564    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:41.523575    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:41.558385    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:41.558399    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:41.562967    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:41.562975    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:41.577584    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:41.577594    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:41.593606    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:41.593621    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:41.618239    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:41.618248    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:41.656942    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:41.656950    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:41.668297    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:41.668313    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:41.680249    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:41.680260    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:40.557390    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:40.557612    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:40.580640    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:40.580732    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:40.598235    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:40.598313    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:40.614439    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:40.614505    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:40.625209    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:40.625274    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:40.635850    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:40.635920    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:40.646298    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:40.646359    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:40.656465    8434 logs.go:276] 0 containers: []
	W0719 07:38:40.656476    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:40.656529    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:40.667073    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:40.667088    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:40.667093    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:40.702014    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:40.702025    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:40.707029    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:40.707039    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:40.721934    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:40.721946    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:40.736477    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:40.736489    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:40.747804    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:40.747815    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:40.766190    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:40.766200    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:40.790294    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:40.790304    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:40.825178    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:40.825193    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:40.839001    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:40.839011    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:40.850892    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:40.850905    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:40.862181    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:40.862196    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:40.876899    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:40.876910    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
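
[Annotation] Each "Gathering logs for ... / Run:" pair above shells out to docker logs --tail 400 <id> for containers and journalctl -u <unit> -n 400 for host services, capped at 400 lines per source. A minimal sketch of that fan-out, assuming local shell access rather than minikube's ssh_runner, with names chosen for illustration only:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherLogs collects the last 400 lines from each container and each
    // systemd unit, matching the commands recorded in the log above.
    // journalctl may require root; the log runs it under sudo in the guest.
    func gatherLogs(containers map[string]string, units []string) map[string]string {
    	out := make(map[string]string)
    	for name, id := range containers {
    		b, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			out[name] = "error: " + err.Error()
    			continue
    		}
    		out[name] = string(b)
    	}
    	for _, u := range units {
    		b, err := exec.Command("journalctl", "-u", u, "-n", "400").CombinedOutput()
    		if err != nil {
    			out[u] = "error: " + err.Error()
    			continue
    		}
    		out[u] = string(b)
    	}
    	return out
    }

    func main() {
    	logs := gatherLogs(
    		map[string]string{"kube-apiserver": "7352fbd733e7"}, // ID taken from the log above
    		[]string{"kubelet", "docker"},
    	)
    	fmt.Println(len(logs), "log streams gathered")
    }
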
	I0719 07:38:43.388596    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:44.197242    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:48.389194    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:48.389415    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:48.408433    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:48.408525    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:48.422266    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:48.422333    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:49.199621    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:49.199783    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:49.214666    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:49.214744    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:49.227288    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:49.227365    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:49.238104    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:49.238167    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:49.248606    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:49.248668    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:49.258933    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:49.259007    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:49.271129    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:49.271201    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:49.281489    8572 logs.go:276] 0 containers: []
	W0719 07:38:49.281498    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:49.281555    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:49.291765    8572 logs.go:276] 0 containers: []
	W0719 07:38:49.291778    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:49.291787    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:49.291793    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:49.303255    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:49.303266    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:49.314632    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:49.314645    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:49.331906    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:49.331916    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:49.349267    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:49.349277    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:49.374582    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:49.374590    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:49.410808    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:49.410823    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:49.424613    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:49.424624    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:49.436287    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:49.436301    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:49.450953    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:49.450963    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:49.464695    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:49.464707    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:49.502101    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:49.502113    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:49.516661    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:49.516672    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:49.554681    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:49.554692    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:49.558916    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:49.558923    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:52.072116    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:48.434474    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:48.434539    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:48.444910    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:48.444979    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:48.455225    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:48.455291    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:48.469045    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:48.469115    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:48.480051    8434 logs.go:276] 0 containers: []
	W0719 07:38:48.480062    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:48.480121    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:48.490819    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:48.490834    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:48.490840    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:48.502631    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:48.502642    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:48.514566    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:48.514577    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:48.526365    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:48.526376    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:48.560523    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:48.560538    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:48.565075    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:48.565083    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:48.579789    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:48.579800    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:48.594554    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:48.594565    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:48.606501    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:48.606516    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:48.640910    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:48.640922    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:48.652919    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:48.652932    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:48.667452    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:48.667463    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:48.684852    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:48.684864    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:51.211907    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:57.074284    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:57.074439    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:57.087786    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:57.087864    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:57.102619    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:57.102684    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:57.112848    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:57.112916    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:57.129119    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:57.129191    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:57.139342    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:57.139407    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:57.149653    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:57.149721    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:57.159760    8572 logs.go:276] 0 containers: []
	W0719 07:38:57.159772    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:57.159830    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:57.169985    8572 logs.go:276] 0 containers: []
	W0719 07:38:57.169997    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:57.170006    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:57.170013    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:57.204596    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:57.204608    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:57.216517    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:57.216530    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:57.231833    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:57.231842    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:57.252941    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:57.252955    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:57.291928    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:57.291935    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:57.306227    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:57.306241    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:57.323385    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:57.323398    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:57.337810    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:57.337824    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:57.362318    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:57.362332    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:57.366330    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:57.366336    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:57.377907    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:57.377920    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:57.391602    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:57.391614    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:57.430001    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:57.430016    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:57.444182    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:57.444197    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:56.214143    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:56.214394    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:56.237358    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:38:56.237455    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:56.252642    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:38:56.252724    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:56.264711    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:38:56.264781    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:56.275064    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:38:56.275127    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:56.285472    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:38:56.285543    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:56.297827    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:38:56.297886    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:56.308676    8434 logs.go:276] 0 containers: []
	W0719 07:38:56.308687    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:56.308741    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:56.319519    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:38:56.319539    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:56.319545    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:56.353269    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:56.353278    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:56.387324    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:38:56.387335    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:38:56.399426    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:38:56.399440    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:38:56.411465    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:38:56.411476    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:38:56.428613    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:38:56.428623    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:38:56.443495    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:56.443506    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:56.466915    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:56.466924    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:56.471516    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:38:56.471521    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:38:56.485789    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:38:56.485799    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:38:56.500343    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:38:56.500355    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:38:56.511884    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:38:56.511894    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:38:56.525820    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:38:56.525833    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:59.960331    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:59.039937    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:04.962645    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:04.962839    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:04.978550    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:04.978634    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:04.990397    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:04.990472    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:05.000622    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:05.000688    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:05.011448    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:05.011517    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:05.026823    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:05.026885    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:05.037357    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:05.037423    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:05.052775    8572 logs.go:276] 0 containers: []
	W0719 07:39:05.052786    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:05.052840    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:05.062752    8572 logs.go:276] 0 containers: []
	W0719 07:39:05.062762    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:05.062772    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:05.062777    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:05.080329    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:05.080339    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:05.092027    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:05.092039    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:05.134851    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:05.134868    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:05.146350    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:05.146361    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:05.159715    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:05.159724    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:05.164177    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:05.164185    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:05.177997    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:05.178006    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:05.191692    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:05.191702    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:05.203387    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:05.203397    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:05.218407    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:05.218418    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:05.230836    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:05.230845    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:05.253997    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:05.254004    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:05.292557    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:05.292572    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:05.308263    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:05.308273    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:07.852806    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:04.042266    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:04.042523    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:04.075361    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:04.075465    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:04.091905    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:04.091986    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:04.105516    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:04.105593    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:04.116933    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:04.116998    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:04.127526    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:04.127595    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:04.138080    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:04.138148    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:04.148207    8434 logs.go:276] 0 containers: []
	W0719 07:39:04.148235    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:04.148310    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:04.158499    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:04.158515    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:04.158522    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:04.193297    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:04.193306    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:04.198238    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:04.198244    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:04.209968    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:04.209980    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:04.228224    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:04.228236    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:04.252921    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:04.252933    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:04.265034    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:04.265045    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:04.302011    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:04.302022    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:04.317247    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:04.317261    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:04.331398    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:04.331411    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:04.343383    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:04.343394    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:04.358932    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:04.358942    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:04.374251    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:04.374262    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:06.888112    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:12.855016    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:12.855166    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:12.866296    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:12.866367    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:12.876707    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:12.876779    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:12.887015    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:12.887083    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:12.897552    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:12.897624    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:12.908132    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:12.908203    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:12.918450    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:12.918515    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:12.928212    8572 logs.go:276] 0 containers: []
	W0719 07:39:12.928224    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:12.928280    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:12.938446    8572 logs.go:276] 0 containers: []
	W0719 07:39:12.938455    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:12.938463    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:12.938468    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:12.956132    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:12.956145    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:12.968292    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:12.968303    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:12.979904    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:12.979917    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:12.994718    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:12.994727    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:13.015694    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:13.015708    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:13.029474    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:13.029487    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:13.033493    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:13.033499    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:13.067482    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:13.067496    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:13.081720    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:13.081729    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:13.093735    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:13.093750    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:13.109720    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:13.109732    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:13.132645    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:13.132657    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:13.168893    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:13.168900    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:13.182879    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:13.182892    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:11.889021    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:11.889362    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:11.924051    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:11.924186    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:11.945771    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:11.945885    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:11.960946    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:11.961027    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:11.973251    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:11.973316    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:11.984950    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:11.985024    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:11.997539    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:11.997609    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:12.008017    8434 logs.go:276] 0 containers: []
	W0719 07:39:12.008030    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:12.008088    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:12.018875    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:12.018888    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:12.018894    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:12.033925    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:12.033935    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:12.046870    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:12.046880    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:12.065635    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:12.065645    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:12.082218    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:12.082228    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:12.102767    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:12.102779    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:12.114694    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:12.114704    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:12.150006    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:12.150016    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:12.184775    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:12.184785    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:12.209523    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:12.209531    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:12.222299    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:12.222309    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:12.233544    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:12.233555    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:12.238469    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:12.238478    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
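
Each cycle above opens with a healthz probe against the apiserver that gives up after roughly five seconds with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". As a minimal sketch (not minikube's actual implementation), the probe reduces to an HTTP GET with a short client timeout; the five-second value and the skipped certificate verification are assumptions for illustration only.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the apiserver's /healthz endpoint.
// If the server never answers within the client timeout, http.Client.Get
// returns the "context deadline exceeded (Client.Timeout exceeded while
// awaiting headers)" error seen throughout the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		// Assumed timeout; it matches the ~5 s gap between each
		// "Checking apiserver healthz" and "stopped" pair in the log.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the in-VM apiserver presents a self-signed
			// certificate, so an illustrative probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
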
	I0719 07:39:15.721271    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:14.754802    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:20.723610    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:20.723740    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:20.740291    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:20.740364    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:20.751058    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:20.751124    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:20.761225    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:20.761289    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:20.774467    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:20.774532    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:20.785255    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:20.785323    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:20.795694    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:20.795751    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:20.806258    8572 logs.go:276] 0 containers: []
	W0719 07:39:20.806271    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:20.806328    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:20.815926    8572 logs.go:276] 0 containers: []
	W0719 07:39:20.815939    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:20.815947    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:20.815954    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:20.827489    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:20.827502    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:20.842328    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:20.842338    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:20.861099    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:20.861115    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:20.874997    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:20.875007    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:20.911560    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:20.911571    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:20.923012    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:20.923028    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:20.946603    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:20.946611    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:20.962258    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:20.962269    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:20.999540    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:20.999548    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:21.004054    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:21.004063    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:21.020398    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:21.020409    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:21.035530    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:21.035544    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:21.053236    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:21.053245    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:21.087868    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:21.087880    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
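
On every failed probe the runner re-enumerates the control-plane containers, one docker ps -a --filter=name=k8s_<component> --format={{.ID}} per component, and then tails each hit with docker logs --tail 400 <id>. A self-contained sketch of that enumeration loop follows, assuming local Docker access rather than the test's SSH runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the enumeration step from the log: it returns the
// IDs of all containers (running or exited) whose name carries the
// k8s_ prefix for the given component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Tail the last 400 lines of each container, as in the
			// "Gathering logs for ..." steps above.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = out // a real collector would persist this output
		}
	}
}
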
	I0719 07:39:19.757206    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:19.757548    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:19.797264    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:19.797364    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:19.813403    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:19.813486    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:19.827057    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:19.827130    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:19.838018    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:19.838089    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:19.849120    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:19.849185    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:19.859900    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:19.859969    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:19.870299    8434 logs.go:276] 0 containers: []
	W0719 07:39:19.870313    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:19.870379    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:19.882023    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:19.882037    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:19.882044    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:19.896955    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:19.896968    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:19.910598    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:19.910608    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:19.922894    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:19.922905    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:19.939412    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:19.939424    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:19.959260    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:19.959269    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:19.984140    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:19.984148    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:19.996174    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:19.996184    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:20.031021    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:20.031029    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:20.035492    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:20.035498    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:20.076332    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:20.076342    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:20.091117    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:20.091127    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:20.105565    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:20.105577    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
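
Besides per-container logs, every cycle also collects host-side state: the kubelet and Docker/cri-docker journals, warning-level-and-up dmesg, kubectl describe nodes against the in-VM kubeconfig, and a crictl listing with a docker ps fallback. The commands in the sketch below are copied verbatim from the log; the sketch simply runs them through /bin/bash -c, under the assumption that it executes on the minikube VM itself rather than over the test's SSH channel.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Host-side collectors, verbatim from the gather cycles above.
	collectors := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"Docker", `sudo journalctl -u docker -u cri-docker -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, c := range collectors {
		// CombinedOutput captures stdout and stderr together, which is
		// what a diagnostic dump wants.
		out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", c.name, err, out)
	}
}
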
	I0719 07:39:22.620160    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:23.606140    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:27.622401    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:27.622558    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:27.636562    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:27.636637    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:27.653827    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:27.653889    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:27.664286    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:27.664351    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:27.674834    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:27.674897    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:27.685277    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:27.685344    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:27.700242    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:27.700306    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:27.710663    8434 logs.go:276] 0 containers: []
	W0719 07:39:27.710677    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:27.710726    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:27.721309    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:27.721325    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:27.721331    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:27.757919    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:27.757931    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:27.774956    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:27.774970    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:27.791923    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:27.791934    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:27.803498    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:27.803507    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:27.815008    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:27.815022    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:27.826503    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:27.826514    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:27.851300    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:27.851308    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:27.855481    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:27.855487    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:27.894930    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:27.894941    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:27.909450    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:27.909462    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:27.927078    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:27.927088    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:27.938721    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:27.938733    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:28.608434    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:28.608544    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:28.621999    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:28.622074    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:28.633370    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:28.633442    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:28.643989    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:28.644060    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:28.658811    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:28.658875    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:28.669349    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:28.669420    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:28.680066    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:28.680125    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:28.690026    8572 logs.go:276] 0 containers: []
	W0719 07:39:28.690036    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:28.690094    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:28.700213    8572 logs.go:276] 0 containers: []
	W0719 07:39:28.700226    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:28.700235    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:28.700240    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:28.714432    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:28.714443    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:28.729114    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:28.729127    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:28.741517    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:28.741529    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:28.755521    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:28.755532    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:28.794456    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:28.794474    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:28.798690    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:28.798698    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:28.838025    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:28.838036    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:28.881245    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:28.881258    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:28.893204    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:28.893215    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:28.916509    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:28.916518    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:28.930787    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:28.930800    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:28.943280    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:28.943294    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:28.954636    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:28.954650    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:28.970360    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:28.970371    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:31.494043    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:30.452639    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:36.496396    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:36.496568    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:36.510609    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:36.510688    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:36.521748    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:36.521821    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:36.532112    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:36.532176    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:36.543932    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:36.543997    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:36.561088    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:36.561156    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:36.571981    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:36.572043    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:36.581960    8572 logs.go:276] 0 containers: []
	W0719 07:39:36.581972    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:36.582031    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:36.592686    8572 logs.go:276] 0 containers: []
	W0719 07:39:36.592697    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:36.592704    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:36.592711    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:36.629438    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:36.629446    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:36.664512    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:36.664523    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:36.679681    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:36.679694    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:36.720006    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:36.720018    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:36.731915    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:36.731926    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:36.748889    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:36.748900    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:36.753019    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:36.753027    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:36.766552    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:36.766561    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:36.777445    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:36.777456    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:36.793330    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:36.793344    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:36.807815    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:36.807833    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:36.833903    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:36.833919    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:36.850205    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:36.850219    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:36.863533    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:36.863547    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:35.454999    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:35.455256    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:35.484482    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:35.484604    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:35.502282    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:35.502367    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:35.516601    8434 logs.go:276] 2 containers: [8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:35.516679    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:35.527875    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:35.527940    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:35.538473    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:35.538538    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:35.548864    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:35.548927    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:35.559280    8434 logs.go:276] 0 containers: []
	W0719 07:39:35.559291    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:35.559359    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:35.574005    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:35.574021    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:35.574026    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:35.591600    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:35.591611    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:35.603209    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:35.603222    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:35.615197    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:35.615208    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:35.650412    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:35.650423    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:35.662461    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:35.662471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:35.673983    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:35.673994    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:35.687747    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:35.687757    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:35.700141    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:35.700151    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:35.714469    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:35.714479    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:35.739314    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:35.739321    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:35.773655    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:35.773663    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:35.778072    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:35.778078    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:38.300197    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:39.383590    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:43.302518    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:43.302873    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:43.340636    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:43.340767    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:43.361058    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:43.361145    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:43.376104    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:43.376184    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:43.390178    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:43.390249    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:43.404279    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:43.404353    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:43.414885    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:43.414957    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:43.425488    8434 logs.go:276] 0 containers: []
	W0719 07:39:43.425504    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:43.425568    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:44.385926    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:44.386116    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:44.406377    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:44.406486    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:44.420732    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:44.420810    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:44.432045    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:44.432109    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:44.442610    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:44.442678    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:44.453071    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:44.453136    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:44.463532    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:44.463602    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:44.473495    8572 logs.go:276] 0 containers: []
	W0719 07:39:44.473509    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:44.473569    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:44.483831    8572 logs.go:276] 0 containers: []
	W0719 07:39:44.483842    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:44.483850    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:44.483856    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:44.496089    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:44.496099    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:44.511284    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:44.511294    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:44.525939    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:44.525949    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:44.539644    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:44.539656    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:44.554413    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:44.554426    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:44.574252    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:44.574262    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:44.586522    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:44.586534    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:44.591516    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:44.591523    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:44.606388    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:44.606400    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:44.647648    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:44.647660    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:44.664467    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:44.664480    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:44.699047    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:44.699058    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:44.713249    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:44.713264    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:44.751907    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:44.751914    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:47.277467    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:43.436652    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:43.436668    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:43.436673    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:43.441138    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:43.441146    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:43.452740    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:43.452751    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:43.488136    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:43.488144    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:43.500196    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:43.500208    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:43.512037    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:43.512048    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:43.527593    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:43.527603    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:43.541310    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:39:43.541321    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:39:43.552866    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:43.552876    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:43.564612    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:43.564623    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:43.602333    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:39:43.602343    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:39:43.613443    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:43.613457    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:43.627843    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:43.627852    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:43.648810    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:43.648820    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:43.660813    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:43.660824    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:46.186258    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:52.280044    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:52.280277    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:52.310804    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:52.310926    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:52.326788    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:52.326868    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:52.339324    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:52.339385    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:52.350975    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:52.351046    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:52.361794    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:52.361855    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:52.373006    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:52.373072    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:52.383211    8572 logs.go:276] 0 containers: []
	W0719 07:39:52.383222    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:52.383306    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:52.398650    8572 logs.go:276] 0 containers: []
	W0719 07:39:52.398662    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:52.398670    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:52.398676    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:52.403417    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:52.403423    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:52.417622    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:52.417633    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:52.428502    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:52.428514    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:52.440326    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:52.440337    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:52.480185    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:52.480197    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:52.521479    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:52.521489    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:52.535331    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:52.535344    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:52.551079    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:52.551091    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:52.564800    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:52.564811    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:52.588033    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:52.588045    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:52.600321    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:52.600332    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:52.638689    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:52.638704    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:52.653646    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:52.653658    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:52.665349    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:52.665359    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:51.188512    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:51.188736    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:51.211249    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:51.211371    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:51.226364    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:51.226446    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:51.238899    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:51.238975    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:51.252022    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:51.252091    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:51.263077    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:51.263142    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:51.277719    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:51.277792    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:51.288423    8434 logs.go:276] 0 containers: []
	W0719 07:39:51.288433    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:51.288487    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:51.299683    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:51.299699    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:51.299705    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:51.325053    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:39:51.325061    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:39:51.336211    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:51.336222    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:51.350883    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:51.350895    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:51.362918    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:51.362929    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:51.375935    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:51.375948    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:51.410953    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:51.410962    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:51.429379    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:51.429392    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:51.441459    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:51.441471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:51.455650    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:51.455664    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:51.467844    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:39:51.467858    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:39:51.479228    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:51.479238    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:51.491382    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:51.491392    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:51.508389    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:51.508399    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:51.513009    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:51.513014    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:55.184354    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:54.047811    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:00.186704    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:00.186876    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:00.205492    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:00.205588    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:00.219975    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:00.220048    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:00.232279    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:00.232350    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:00.245164    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:00.245228    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:00.255803    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:00.255877    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:00.266574    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:00.266636    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:00.276536    8572 logs.go:276] 0 containers: []
	W0719 07:40:00.276547    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:00.276606    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:00.286488    8572 logs.go:276] 0 containers: []
	W0719 07:40:00.286499    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:00.286508    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:00.286516    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:00.304329    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:00.304341    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:00.318555    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:00.318565    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:00.330024    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:00.330036    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:00.367818    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:00.367827    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:00.379348    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:00.379360    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:00.383600    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:00.383609    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:00.399048    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:00.399063    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:00.437186    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:00.437199    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:00.451529    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:00.451547    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:00.465759    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:00.465772    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:00.477454    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:00.477463    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:00.500147    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:00.500154    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:00.511466    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:00.511478    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:00.548138    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:00.548146    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:03.066582    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:59.050074    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:59.050284    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:59.065066    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:39:59.065145    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:59.076726    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:39:59.076801    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:59.088901    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:39:59.088981    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:59.099381    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:39:59.099450    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:59.110204    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:39:59.110271    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:59.121027    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:39:59.121097    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:59.131209    8434 logs.go:276] 0 containers: []
	W0719 07:39:59.131225    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:59.131282    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:59.142209    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:39:59.142229    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:59.142236    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:59.146865    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:59.146874    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:59.185742    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:39:59.185756    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:39:59.201297    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:39:59.201310    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:39:59.216107    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:39:59.216117    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:39:59.227846    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:59.227857    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:59.252768    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:59.252777    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:59.287005    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:39:59.287015    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:39:59.298418    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:39:59.298428    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:39:59.310279    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:39:59.310289    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:39:59.322070    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:39:59.322080    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:39:59.334164    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:39:59.334176    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:39:59.348284    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:39:59.348295    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:39:59.362363    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:39:59.362372    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:39:59.379633    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:39:59.379643    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:01.891852    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:08.068842    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:08.068991    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:08.084930    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:08.085005    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:08.098318    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:08.098390    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:08.109338    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:08.109408    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:08.123603    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:08.123672    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:08.134938    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:08.135005    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:08.146248    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:08.146317    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:08.156841    8572 logs.go:276] 0 containers: []
	W0719 07:40:08.156851    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:08.156906    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:08.167031    8572 logs.go:276] 0 containers: []
	W0719 07:40:08.167042    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:08.167049    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:08.167055    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:08.179653    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:08.179664    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:08.193951    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:08.193961    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:08.205447    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:08.205458    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:08.223347    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:08.223357    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:08.227632    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:08.227640    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:08.240142    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:08.240151    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:08.257910    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:08.257921    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:08.271741    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:08.271751    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:06.892599    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:06.892895    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:06.921605    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:06.921754    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:06.940707    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:06.940800    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:06.962385    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:06.962448    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:06.974282    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:06.974351    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:06.985198    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:06.985264    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:06.996461    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:06.996529    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:07.006389    8434 logs.go:276] 0 containers: []
	W0719 07:40:07.006401    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:07.006455    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:07.016780    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:07.016799    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:07.016803    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:07.053346    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:07.053356    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:07.067850    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:07.067862    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:07.079519    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:07.079530    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:07.109533    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:07.109543    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:07.145360    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:07.145371    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:07.159979    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:07.159993    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:07.171610    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:07.171620    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:07.183459    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:07.183468    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:07.188000    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:07.188006    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:07.199830    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:07.199841    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:07.214457    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:07.214471    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:07.238862    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:07.238869    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:07.250549    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:07.250563    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:07.262558    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:07.262567    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:08.308660    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:08.308673    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:08.322885    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:08.322899    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:08.337946    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:08.337955    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:08.362400    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:08.362416    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:08.401394    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:08.401404    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:08.436207    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:08.436218    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:10.947820    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:09.776646    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:15.950046    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:15.950189    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:15.967002    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:15.967085    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:15.979711    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:15.979780    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:15.991662    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:15.991722    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:16.002860    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:16.002924    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:16.014269    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:16.014326    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:16.025262    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:16.025353    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:16.035838    8572 logs.go:276] 0 containers: []
	W0719 07:40:16.035847    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:16.035898    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:16.046170    8572 logs.go:276] 0 containers: []
	W0719 07:40:16.046183    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:16.046191    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:16.046197    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:16.057914    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:16.057925    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:16.069928    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:16.069940    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:16.083575    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:16.083585    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:16.107576    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:16.107587    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:16.120324    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:16.120338    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:16.158751    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:16.158762    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:16.169947    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:16.169958    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:16.208983    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:16.208994    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:16.213353    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:16.213362    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:16.247526    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:16.247540    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:16.266872    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:16.266882    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:16.281490    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:16.281500    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:16.295383    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:16.295396    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:16.311418    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:16.311429    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:14.779143    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:14.779246    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:14.793470    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:14.793550    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:14.805509    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:14.805578    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:14.818778    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:14.818851    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:14.829058    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:14.829128    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:14.839376    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:14.839441    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:14.850029    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:14.850100    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:14.860342    8434 logs.go:276] 0 containers: []
	W0719 07:40:14.860356    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:14.860409    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:14.874700    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:14.874718    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:14.874725    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:14.886193    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:14.886205    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:14.920254    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:14.920264    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:14.934058    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:14.934068    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:14.946125    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:14.946136    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:14.958044    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:14.958053    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:14.970932    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:14.970943    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:14.985881    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:14.985893    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:14.998029    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:14.998038    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:15.021647    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:15.021655    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:15.026503    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:15.026512    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:15.041238    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:15.041248    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:15.052744    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:15.052753    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:15.085870    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:15.085883    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:15.100808    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:15.100819    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:17.620013    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:18.828194    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:22.622296    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:22.622468    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:22.636926    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:22.637005    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:22.648956    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:22.649026    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:22.659809    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:22.659885    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:22.670241    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:22.670309    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:22.680128    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:22.680195    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:22.690108    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:22.690179    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:22.700247    8434 logs.go:276] 0 containers: []
	W0719 07:40:22.700260    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:22.700311    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:22.710342    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:22.710359    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:22.710364    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:22.721511    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:22.721524    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:22.747122    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:22.747137    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:22.809718    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:22.809731    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:22.824730    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:22.824740    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:22.836656    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:22.836667    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:22.851823    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:22.851834    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:22.869163    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:22.869172    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:22.873635    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:22.873645    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:22.890749    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:22.890759    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:22.902888    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:22.902902    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:22.916622    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:22.916633    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:22.950085    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:22.950096    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:22.964461    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:22.964471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:22.976633    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:22.976643    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:23.830460    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:23.830768    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:23.864046    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:23.864133    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:23.879846    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:23.879912    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:23.892214    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:23.892270    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:23.903631    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:23.903691    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:23.914427    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:23.914479    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:23.924788    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:23.924847    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:23.935271    8572 logs.go:276] 0 containers: []
	W0719 07:40:23.935281    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:23.935326    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:23.945509    8572 logs.go:276] 0 containers: []
	W0719 07:40:23.945522    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:23.945529    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:23.945535    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:23.949858    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:23.949867    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:23.984510    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:23.984524    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:23.998966    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:23.998979    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:24.010302    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:24.010311    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:24.022028    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:24.022038    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:24.037485    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:24.037498    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:24.055442    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:24.055452    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:24.069020    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:24.069033    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:24.084170    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:24.084184    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:24.124078    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:24.124089    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:24.136185    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:24.136196    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:24.175866    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:24.175874    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:24.190321    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:24.190331    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:24.212493    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:24.212501    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:26.726172    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:25.490712    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:31.728657    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:31.729067    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:31.772904    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:31.773009    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:31.788776    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:31.788855    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:31.802054    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:31.802127    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:31.817847    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:31.817917    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:31.828294    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:31.828370    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:31.839943    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:31.840008    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:31.850894    8572 logs.go:276] 0 containers: []
	W0719 07:40:31.850904    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:31.850968    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:31.861225    8572 logs.go:276] 0 containers: []
	W0719 07:40:31.861237    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:31.861244    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:31.861250    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:31.875251    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:31.875266    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:31.890119    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:31.890128    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:31.913609    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:31.913619    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:31.924846    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:31.924855    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:31.929241    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:31.929250    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:31.940398    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:31.940412    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:31.981849    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:31.981861    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:31.996474    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:31.996483    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:32.008897    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:32.008909    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:32.023846    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:32.023860    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:32.061351    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:32.061360    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:32.096455    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:32.096469    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:32.113859    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:32.113871    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:32.128723    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:32.128737    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:30.493476    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:30.493855    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:30.526286    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:30.526411    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:30.546447    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:30.546540    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:30.564059    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:30.564131    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:30.585076    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:30.585144    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:30.595731    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:30.595804    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:30.606583    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:30.606650    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:30.617209    8434 logs.go:276] 0 containers: []
	W0719 07:40:30.617224    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:30.617277    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:30.627810    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:30.627827    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:30.627833    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:30.645516    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:30.645526    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:30.670593    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:30.670602    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:30.682325    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:30.682339    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:30.697301    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:30.697314    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:30.711036    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:30.711047    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:30.722106    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:30.722116    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:30.737514    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:30.737525    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:30.742465    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:30.742471    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:30.754504    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:30.754517    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:30.766215    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:30.766226    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:30.799538    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:30.799547    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:30.813758    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:30.813771    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:30.828255    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:30.828266    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:30.841670    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:30.841679    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:33.379152    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:34.647940    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:38.381454    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:38.381634    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:38.396912    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:38.396993    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:38.409129    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:38.409197    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:38.423537    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:38.423621    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:39.650233    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:39.650460    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:39.680439    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:39.680549    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:39.698322    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:39.698394    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:39.712699    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:39.712761    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:39.724666    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:39.724737    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:39.735948    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:39.736011    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:39.747081    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:39.747163    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:39.757884    8572 logs.go:276] 0 containers: []
	W0719 07:40:39.757896    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:39.757959    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:39.768079    8572 logs.go:276] 0 containers: []
	W0719 07:40:39.768092    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:39.768100    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:39.768106    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:39.807798    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:39.807808    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:39.812405    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:39.812413    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:39.824964    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:39.824976    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:39.839017    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:39.839028    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:39.853604    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:39.853614    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:39.866402    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:39.866412    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:39.891212    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:39.891226    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:39.955141    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:39.955159    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:39.996265    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:39.996282    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:40.011325    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:40.011335    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:40.022625    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:40.022636    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:40.040484    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:40.040493    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:40.052122    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:40.052138    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:40.066561    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:40.066570    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:42.582562    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:38.434250    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:38.434313    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:38.444909    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:38.444974    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:38.456360    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:38.456425    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:38.466758    8434 logs.go:276] 0 containers: []
	W0719 07:40:38.466769    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:38.466821    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:38.477533    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:38.477549    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:38.477554    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:38.492988    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:38.493005    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:38.510468    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:38.510482    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:38.523490    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:38.523499    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:38.528055    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:38.528063    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:38.563019    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:38.563032    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:38.577525    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:38.577537    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:38.589558    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:38.589572    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:38.614282    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:38.614289    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:38.628069    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:38.628079    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:38.643549    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:38.643558    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:38.659029    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:38.659039    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:38.670982    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:38.670992    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:38.704209    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:38.704219    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:38.718914    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:38.718927    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:41.232679    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:47.584906    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:47.585083    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:47.605952    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:47.606046    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:47.621561    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:47.621637    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:47.633486    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:47.633555    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:47.644016    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:47.644086    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:47.654829    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:47.654890    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:47.665019    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:47.665079    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:47.675340    8572 logs.go:276] 0 containers: []
	W0719 07:40:47.675354    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:47.675410    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:47.686291    8572 logs.go:276] 0 containers: []
	W0719 07:40:47.686304    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:47.686312    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:47.686319    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:47.708709    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:47.708716    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:47.720211    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:47.720221    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:47.740741    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:47.740752    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:47.752172    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:47.752185    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:47.764337    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:47.764348    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:47.777994    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:47.778007    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:47.814984    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:47.814992    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:47.819175    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:47.819184    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:47.858706    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:47.858715    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:47.876879    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:47.876889    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:47.894857    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:47.894868    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:47.907299    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:47.907313    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:47.945053    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:47.945064    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:47.960168    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:47.960180    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:46.235207    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:46.235713    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:46.273545    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:46.273684    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:46.295093    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:46.295181    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:46.312382    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:46.312458    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:46.324828    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:46.324896    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:46.337855    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:46.337922    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:46.350363    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:46.350437    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:46.361445    8434 logs.go:276] 0 containers: []
	W0719 07:40:46.361455    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:46.361511    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:46.372654    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:46.372673    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:46.372678    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:46.408044    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:46.408054    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:46.421301    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:46.421311    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:46.433718    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:46.433728    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:46.448682    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:46.448692    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:46.482677    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:46.482688    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:46.497261    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:46.497274    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:46.514669    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:46.514683    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:46.529325    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:46.529335    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:46.557394    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:46.557405    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:46.568941    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:46.568956    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:46.573279    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:46.573287    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:46.586660    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:46.586674    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:46.598751    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:46.598760    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:46.616787    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:46.616801    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:50.483599    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:49.130806    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:55.485907    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:55.485997    8572 kubeadm.go:597] duration metric: took 4m3.335150958s to restartPrimaryControlPlane
	W0719 07:40:55.486062    8572 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0719 07:40:55.486098    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0719 07:40:56.411403    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 07:40:56.416373    8572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 07:40:56.419000    8572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 07:40:56.421753    8572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 07:40:56.421759    8572 kubeadm.go:157] found existing configuration files:
	
	I0719 07:40:56.421785    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf
	I0719 07:40:56.424230    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 07:40:56.424261    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 07:40:56.426925    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf
	I0719 07:40:56.430160    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 07:40:56.430188    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 07:40:56.433034    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf
	I0719 07:40:56.435399    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 07:40:56.435416    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 07:40:56.438425    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf
	I0719 07:40:56.441609    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 07:40:56.441628    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 07:40:56.444166    8572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 07:40:56.459842    8572 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 07:40:56.460022    8572 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 07:40:56.507913    8572 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 07:40:56.507997    8572 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 07:40:56.508117    8572 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 07:40:56.562302    8572 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 07:40:56.568454    8572 out.go:204]   - Generating certificates and keys ...
	I0719 07:40:56.568490    8572 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 07:40:56.568522    8572 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 07:40:56.568572    8572 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 07:40:56.568602    8572 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 07:40:56.568635    8572 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 07:40:56.568659    8572 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 07:40:56.568690    8572 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 07:40:56.568720    8572 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 07:40:56.568772    8572 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 07:40:56.568811    8572 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 07:40:56.568834    8572 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 07:40:56.568865    8572 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 07:40:56.617960    8572 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 07:40:56.727489    8572 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 07:40:56.930677    8572 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 07:40:57.024793    8572 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 07:40:57.056485    8572 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 07:40:57.056896    8572 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 07:40:57.056920    8572 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 07:40:57.124274    8572 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 07:40:57.127621    8572 out.go:204]   - Booting up control plane ...
	I0719 07:40:57.127670    8572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 07:40:57.127721    8572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 07:40:57.127832    8572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 07:40:57.137382    8572 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 07:40:57.138045    8572 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 07:40:54.132918    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:54.133110    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:54.151097    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:40:54.151189    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:54.164402    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:40:54.164464    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:54.176201    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:40:54.176272    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:54.186363    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:40:54.186434    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:54.196957    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:40:54.197018    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:54.207719    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:40:54.207787    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:54.217772    8434 logs.go:276] 0 containers: []
	W0719 07:40:54.217781    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:54.217830    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:54.228057    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:40:54.228082    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:40:54.228087    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:40:54.246250    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:40:54.246261    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:40:54.258014    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:54.258024    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:54.282541    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:54.282549    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:54.286841    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:54.286847    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:54.323533    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:40:54.323546    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:40:54.338732    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:40:54.338744    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:40:54.353901    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:40:54.353911    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:40:54.365502    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:54.365512    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:54.398371    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:40:54.398382    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:40:54.409589    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:40:54.409599    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:40:54.421182    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:40:54.421191    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:40:54.432394    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:40:54.432408    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:54.444548    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:40:54.444564    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:40:54.455904    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:40:54.455915    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:40:56.975700    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:01.640121    8572 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501626 seconds
	I0719 07:41:01.640350    8572 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 07:41:01.644563    8572 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 07:41:02.153625    8572 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 07:41:02.153871    8572 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-109000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 07:41:02.657809    8572 kubeadm.go:310] [bootstrap-token] Using token: 9qmjl0.4axsfkhx88jmp3qy
	I0719 07:41:02.670045    8572 out.go:204]   - Configuring RBAC rules ...
	I0719 07:41:02.670127    8572 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 07:41:02.670178    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 07:41:02.670804    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 07:41:02.671686    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 07:41:02.672747    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 07:41:02.673533    8572 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 07:41:02.677752    8572 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 07:41:02.836215    8572 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 07:41:03.061224    8572 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 07:41:03.061651    8572 kubeadm.go:310] 
	I0719 07:41:03.061683    8572 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 07:41:03.061688    8572 kubeadm.go:310] 
	I0719 07:41:03.061730    8572 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 07:41:03.061740    8572 kubeadm.go:310] 
	I0719 07:41:03.061756    8572 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 07:41:03.061782    8572 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 07:41:03.061828    8572 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 07:41:03.061834    8572 kubeadm.go:310] 
	I0719 07:41:03.061858    8572 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 07:41:03.061862    8572 kubeadm.go:310] 
	I0719 07:41:03.061883    8572 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 07:41:03.061886    8572 kubeadm.go:310] 
	I0719 07:41:03.061911    8572 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 07:41:03.061946    8572 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 07:41:03.061982    8572 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 07:41:03.061989    8572 kubeadm.go:310] 
	I0719 07:41:03.062036    8572 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 07:41:03.062077    8572 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 07:41:03.062080    8572 kubeadm.go:310] 
	I0719 07:41:03.062122    8572 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9qmjl0.4axsfkhx88jmp3qy \
	I0719 07:41:03.062175    8572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 \
	I0719 07:41:03.062187    8572 kubeadm.go:310] 	--control-plane 
	I0719 07:41:03.062189    8572 kubeadm.go:310] 
	I0719 07:41:03.062230    8572 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 07:41:03.062235    8572 kubeadm.go:310] 
	I0719 07:41:03.062277    8572 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9qmjl0.4axsfkhx88jmp3qy \
	I0719 07:41:03.062341    8572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 
	I0719 07:41:03.062616    8572 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 07:41:03.062626    8572 cni.go:84] Creating CNI manager for ""
	I0719 07:41:03.062635    8572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:41:03.070161    8572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 07:41:03.074264    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 07:41:03.077246    8572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 07:41:03.082132    8572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 07:41:03.082207    8572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-109000 minikube.k8s.io/updated_at=2024_07_19T07_41_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=stopped-upgrade-109000 minikube.k8s.io/primary=true
	I0719 07:41:03.082207    8572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 07:41:03.113627    8572 kubeadm.go:1113] duration metric: took 31.457541ms to wait for elevateKubeSystemPrivileges
	I0719 07:41:03.126417    8572 ops.go:34] apiserver oom_adj: -16
	I0719 07:41:03.126431    8572 kubeadm.go:394] duration metric: took 4m10.989210041s to StartCluster
	I0719 07:41:03.126445    8572 settings.go:142] acquiring lock: {Name:mk67df71d562cbffe9f3bde88489898c395cdfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:41:03.126537    8572 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:41:03.126934    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/kubeconfig: {Name:mk0c17b3830610cdae4c834f6bae9631cabc7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:41:03.127143    8572 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:41:03.127157    8572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 07:41:03.127191    8572 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-109000"
	I0719 07:41:03.127207    8572 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-109000"
	W0719 07:41:03.127210    8572 addons.go:243] addon storage-provisioner should already be in state true
	I0719 07:41:03.127224    8572 host.go:66] Checking if "stopped-upgrade-109000" exists ...
	I0719 07:41:03.127225    8572 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-109000"
	I0719 07:41:03.127241    8572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-109000"
	I0719 07:41:03.127224    8572 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:41:03.131226    8572 out.go:177] * Verifying Kubernetes components...
	I0719 07:41:03.131888    8572 kapi.go:59] client config for stopped-upgrade-109000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fd7790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 07:41:03.135531    8572 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-109000"
	W0719 07:41:03.135536    8572 addons.go:243] addon default-storageclass should already be in state true
	I0719 07:41:03.135545    8572 host.go:66] Checking if "stopped-upgrade-109000" exists ...
	I0719 07:41:03.136083    8572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 07:41:03.136090    8572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 07:41:03.136095    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:41:03.139180    8572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:41:03.143206    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:41:03.144378    8572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:41:03.144383    8572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 07:41:03.144387    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:41:03.220988    8572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 07:41:03.226145    8572 api_server.go:52] waiting for apiserver process to appear ...
	I0719 07:41:03.226186    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:41:03.232157    8572 api_server.go:72] duration metric: took 105.0045ms to wait for apiserver process to appear ...
	I0719 07:41:03.232165    8572 api_server.go:88] waiting for apiserver healthz status ...
	I0719 07:41:03.232173    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:03.252036    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:41:01.977962    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:01.978209    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:41:01.991774    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:41:01.991856    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:41:02.004776    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:41:02.004846    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:41:02.022234    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:41:02.022309    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:41:02.033653    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:41:02.033714    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:41:02.056377    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:41:02.056446    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:41:02.075545    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:41:02.075626    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:41:02.091214    8434 logs.go:276] 0 containers: []
	W0719 07:41:02.091226    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:41:02.091287    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:41:02.105511    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:41:02.105530    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:41:02.105535    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:41:02.141443    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:41:02.141457    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:41:02.153453    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:41:02.153467    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:41:02.168697    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:41:02.168707    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:41:02.180618    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:41:02.180627    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:41:02.215481    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:41:02.215488    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:41:02.219961    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:41:02.219969    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:41:02.231124    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:41:02.231139    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:41:02.246817    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:41:02.246827    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:41:02.265164    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:41:02.265174    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:41:02.276714    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:41:02.276724    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:41:02.291070    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:41:02.291079    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:41:02.302735    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:41:02.302746    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:41:02.314380    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:41:02.314391    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:41:02.332160    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:41:02.332169    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:41:03.309652    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 07:41:08.234333    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:08.234379    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:04.857668    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:13.234758    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:13.234782    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:09.859222    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:09.859468    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:41:09.884917    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:41:09.885033    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:41:09.901651    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:41:09.901738    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:41:09.915237    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:41:09.915303    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:41:09.929628    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:41:09.929699    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:41:09.941343    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:41:09.941410    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:41:09.951951    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:41:09.952019    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:41:09.961628    8434 logs.go:276] 0 containers: []
	W0719 07:41:09.961638    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:41:09.961695    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:41:09.972291    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:41:09.972307    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:41:09.972312    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:41:09.976750    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:41:09.976759    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:41:10.010903    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:41:10.010918    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:41:10.025262    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:41:10.025277    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:41:10.042847    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:41:10.042860    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:41:10.057997    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:41:10.058010    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:41:10.082979    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:41:10.082986    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:41:10.116556    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:41:10.116566    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:41:10.130897    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:41:10.130907    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:41:10.143094    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:41:10.143108    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:41:10.155274    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:41:10.155287    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:41:10.172547    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:41:10.172557    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:41:10.184132    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:41:10.184145    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:41:10.195296    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:41:10.195309    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:41:10.207126    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:41:10.207136    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:41:12.722565    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:18.235107    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:18.235131    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:17.724782    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:17.724925    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:41:17.736959    8434 logs.go:276] 1 containers: [7352fbd733e7]
	I0719 07:41:17.737043    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:41:17.749904    8434 logs.go:276] 1 containers: [6ee3dcc8f373]
	I0719 07:41:17.749970    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:41:17.762100    8434 logs.go:276] 4 containers: [7fd9175f811d d1257897620b 8cc60ed45693 ed5753ac5bdd]
	I0719 07:41:17.762178    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:41:17.781900    8434 logs.go:276] 1 containers: [d10bbe034fb3]
	I0719 07:41:17.781976    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:41:17.793669    8434 logs.go:276] 1 containers: [e6e962b4d57e]
	I0719 07:41:17.793739    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:41:17.805531    8434 logs.go:276] 1 containers: [add7facb14bf]
	I0719 07:41:17.805605    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:41:17.816873    8434 logs.go:276] 0 containers: []
	W0719 07:41:17.816886    8434 logs.go:278] No container was found matching "kindnet"
	I0719 07:41:17.816948    8434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:41:17.827885    8434 logs.go:276] 1 containers: [d7cc7f846035]
	I0719 07:41:17.827903    8434 logs.go:123] Gathering logs for dmesg ...
	I0719 07:41:17.827912    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:41:17.832830    8434 logs.go:123] Gathering logs for coredns [7fd9175f811d] ...
	I0719 07:41:17.832842    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd9175f811d"
	I0719 07:41:17.845939    8434 logs.go:123] Gathering logs for coredns [d1257897620b] ...
	I0719 07:41:17.845951    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1257897620b"
	I0719 07:41:17.862616    8434 logs.go:123] Gathering logs for coredns [ed5753ac5bdd] ...
	I0719 07:41:17.862627    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed5753ac5bdd"
	I0719 07:41:17.876038    8434 logs.go:123] Gathering logs for storage-provisioner [d7cc7f846035] ...
	I0719 07:41:17.876052    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7cc7f846035"
	I0719 07:41:17.888984    8434 logs.go:123] Gathering logs for container status ...
	I0719 07:41:17.888996    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:41:17.901503    8434 logs.go:123] Gathering logs for kubelet ...
	I0719 07:41:17.901514    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:41:17.937551    8434 logs.go:123] Gathering logs for kube-apiserver [7352fbd733e7] ...
	I0719 07:41:17.937568    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7352fbd733e7"
	I0719 07:41:17.954115    8434 logs.go:123] Gathering logs for coredns [8cc60ed45693] ...
	I0719 07:41:17.954127    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cc60ed45693"
	I0719 07:41:17.968066    8434 logs.go:123] Gathering logs for kube-scheduler [d10bbe034fb3] ...
	I0719 07:41:17.968080    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d10bbe034fb3"
	I0719 07:41:17.984708    8434 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:41:17.984722    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:41:18.025133    8434 logs.go:123] Gathering logs for kube-proxy [e6e962b4d57e] ...
	I0719 07:41:18.025146    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6e962b4d57e"
	I0719 07:41:18.039048    8434 logs.go:123] Gathering logs for kube-controller-manager [add7facb14bf] ...
	I0719 07:41:18.039061    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 add7facb14bf"
	I0719 07:41:18.060193    8434 logs.go:123] Gathering logs for Docker ...
	I0719 07:41:18.060207    8434 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:41:18.087209    8434 logs.go:123] Gathering logs for etcd [6ee3dcc8f373] ...
	I0719 07:41:18.087235    8434 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ee3dcc8f373"
	I0719 07:41:23.235545    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:23.235590    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:20.606006    8434 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:25.608367    8434 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:25.613085    8434 out.go:177] 
	W0719 07:41:25.617913    8434 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0719 07:41:25.617931    8434 out.go:239] * 
	W0719 07:41:25.619236    8434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:41:25.633920    8434 out.go:177] 
	I0719 07:41:28.236712    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:28.236759    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:33.237680    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:33.237733    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0719 07:41:33.631398    8572 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0719 07:41:33.635475    8572 out.go:177] * Enabled addons: storage-provisioner
	I0719 07:41:33.643463    8572 addons.go:510] duration metric: took 30.516593s for enable addons: enabled=[storage-provisioner]
	I0719 07:41:38.238923    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:38.238976    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Fri 2024-07-19 14:32:35 UTC, ends at Fri 2024-07-19 14:41:41 UTC. --
	Jul 19 14:41:26 running-upgrade-059000 dockerd[2891]: time="2024-07-19T14:41:26.919897611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 14:41:26 running-upgrade-059000 dockerd[2891]: time="2024-07-19T14:41:26.919980728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 14:41:26 running-upgrade-059000 dockerd[2891]: time="2024-07-19T14:41:26.920007476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 14:41:26 running-upgrade-059000 dockerd[2891]: time="2024-07-19T14:41:26.920084052Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8cb24c1cca9c6a31de3c7a7b5ed00bed7c047c5e2e679748760553538a4b5eb2 pid=17871 runtime=io.containerd.runc.v2
	Jul 19 14:41:27 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:27Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 14:41:27 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:27Z" level=error msg="ContainerStats resp: {0x400041e5c0 linux}"
	Jul 19 14:41:28 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:28Z" level=error msg="ContainerStats resp: {0x40008b1a80 linux}"
	Jul 19 14:41:28 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:28Z" level=error msg="ContainerStats resp: {0x40008b1bc0 linux}"
	Jul 19 14:41:28 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:28Z" level=error msg="ContainerStats resp: {0x4000843740 linux}"
	Jul 19 14:41:28 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:28Z" level=error msg="ContainerStats resp: {0x40008b1d00 linux}"
	Jul 19 14:41:28 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:28Z" level=error msg="ContainerStats resp: {0x40005ca0c0 linux}"
	Jul 19 14:41:28 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:28Z" level=error msg="ContainerStats resp: {0x40005ca240 linux}"
	Jul 19 14:41:28 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:28Z" level=error msg="ContainerStats resp: {0x4000aadec0 linux}"
	Jul 19 14:41:32 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:32Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 14:41:37 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 19 14:41:38 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:38Z" level=error msg="ContainerStats resp: {0x40008a5ac0 linux}"
	Jul 19 14:41:38 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:38Z" level=error msg="ContainerStats resp: {0x40008a5c00 linux}"
	Jul 19 14:41:39 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:39Z" level=error msg="ContainerStats resp: {0x4000a0be80 linux}"
	Jul 19 14:41:39 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:39Z" level=error msg="ContainerStats resp: {0x4000a0bfc0 linux}"
	Jul 19 14:41:39 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:39Z" level=error msg="ContainerStats resp: {0x4000902780 linux}"
	Jul 19 14:41:39 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:39Z" level=error msg="ContainerStats resp: {0x4000902b80 linux}"
	Jul 19 14:41:39 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:39Z" level=error msg="ContainerStats resp: {0x40005cad00 linux}"
	Jul 19 14:41:39 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:39Z" level=error msg="ContainerStats resp: {0x40009032c0 linux}"
	Jul 19 14:41:39 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:39Z" level=error msg="ContainerStats resp: {0x4000903700 linux}"
	Jul 19 14:41:40 running-upgrade-059000 cri-dockerd[2732]: time="2024-07-19T14:41:40Z" level=error msg="ContainerStats resp: {0x40004f2040 linux}"
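Note: the "ContainerStats resp" entries above are logged at error level but print a Go pointer to a stats response, so they read as noise from cri-dockerd's stats handler rather than actual failures. A minimal sketch for pulling the same per-container stats by hand, assuming crictl is present inside the node and using the cri-socket path reported in the node annotations further down:

	# Query cri-dockerd's CRI endpoint directly (socket path assumed from
	# the kubeadm cri-socket annotation under "describe nodes").
	crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock stats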
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	beef173fc0b5d       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   98b7da6116307
	8cb24c1cca9c6       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   608e407c7b272
	7fd9175f811d0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   98b7da6116307
	d1257897620bd       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   608e407c7b272
	e6e962b4d57e0       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   68723075d5825
	d7cc7f846035a       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   270dcd5565050
	6ee3dcc8f3732       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   c4113cec4f344
	d10bbe034fb37       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   c3cf39848a1b5
	7352fbd733e72       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   66bff8a9bfae8
	add7facb14bfc       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   bf264a35b085a
	
	
	==> coredns [7fd9175f811d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 424042934532343967.5977486803172088249. HINFO: read udp 10.244.0.2:33824->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 424042934532343967.5977486803172088249. HINFO: read udp 10.244.0.2:54851->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 424042934532343967.5977486803172088249. HINFO: read udp 10.244.0.2:47883->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 424042934532343967.5977486803172088249. HINFO: read udp 10.244.0.2:55950->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 424042934532343967.5977486803172088249. HINFO: read udp 10.244.0.2:33021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 424042934532343967.5977486803172088249. HINFO: read udp 10.244.0.2:57024->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 424042934532343967.5977486803172088249. HINFO: read udp 10.244.0.2:48973->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8cb24c1cca9c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2813038226151957487.8302037365954637495. HINFO: read udp 10.244.0.3:36351->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2813038226151957487.8302037365954637495. HINFO: read udp 10.244.0.3:52284->10.0.2.3:53: i/o timeout
	
	
	==> coredns [beef173fc0b5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6344892551612974186.2531763400656919829. HINFO: read udp 10.244.0.2:47356->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6344892551612974186.2531763400656919829. HINFO: read udp 10.244.0.2:50606->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d1257897620b] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:52798->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:32780->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:43266->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:58051->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:52533->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:50738->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:59323->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:56694->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:45036->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4853251581143225727.7391845460709736165. HINFO: read udp 10.244.0.3:60538->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
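All four CoreDNS instances above time out on their HINFO self-probes to 10.0.2.3:53, the upstream resolver that QEMU's user-mode networking exposes to the guest; CoreDNS itself starts, but upstream resolution is failing. A sketch of the check one could run against that resolver while the cluster is up, assuming busybox nslookup is available in the guest:

	# Ask the QEMU user-net resolver directly, bypassing CoreDNS.
	out/minikube-darwin-arm64 ssh -p running-upgrade-059000 "nslookup kubernetes.io 10.0.2.3"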
	
	==> describe nodes <==
	Name:               running-upgrade-059000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-059000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de
	                    minikube.k8s.io/name=running-upgrade-059000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T07_37_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 14:37:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-059000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 14:41:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 14:37:24 +0000   Fri, 19 Jul 2024 14:37:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 14:37:24 +0000   Fri, 19 Jul 2024 14:37:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 14:37:24 +0000   Fri, 19 Jul 2024 14:37:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 14:37:24 +0000   Fri, 19 Jul 2024 14:37:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-059000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd8971820a5f431e9c704898f0c72035
	  System UUID:                bd8971820a5f431e9c704898f0c72035
	  Boot ID:                    0c4e77c6-7068-4076-84b7-373b1bd56bd3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7rpzz                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-ptqvd                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-059000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-059000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-059000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-hkj79                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-059000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-059000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-059000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-059000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-059000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-059000 event: Registered Node running-upgrade-059000 in Controller
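The request/limit percentages in the tables above are relative to the node's allocatable resources (2 CPUs = 2000m, memory 2148820Ki), which checks out; a sketch in shell arithmetic:

	# cpu: 850m requested of 2000m allocatable -> 42(%)
	echo $(( 850 * 100 / 2000 ))
	# memory: 240Mi requested of 2148820Ki allocatable -> 11(%)
	echo $(( 240 * 1024 * 100 / 2148820 ))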
	
	
	==> dmesg <==
	[  +1.933579] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.069202] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.065581] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[  +1.146056] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.079344] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[  +0.058800] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +2.938398] systemd-fstab-generator[1299]: Ignoring "noauto" for root device
	[Jul19 14:33] systemd-fstab-generator[1854]: Ignoring "noauto" for root device
	[  +2.482355] systemd-fstab-generator[2200]: Ignoring "noauto" for root device
	[  +0.145058] systemd-fstab-generator[2234]: Ignoring "noauto" for root device
	[  +0.073528] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[  +0.088218] systemd-fstab-generator[2261]: Ignoring "noauto" for root device
	[  +1.671449] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.149180] systemd-fstab-generator[2689]: Ignoring "noauto" for root device
	[  +0.065398] systemd-fstab-generator[2700]: Ignoring "noauto" for root device
	[  +0.055420] systemd-fstab-generator[2711]: Ignoring "noauto" for root device
	[  +0.072157] systemd-fstab-generator[2725]: Ignoring "noauto" for root device
	[  +2.379975] systemd-fstab-generator[2877]: Ignoring "noauto" for root device
	[  +2.820473] systemd-fstab-generator[3269]: Ignoring "noauto" for root device
	[  +2.148303] systemd-fstab-generator[3606]: Ignoring "noauto" for root device
	[ +18.756388] kauditd_printk_skb: 68 callbacks suppressed
	[Jul19 14:37] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.401085] systemd-fstab-generator[10920]: Ignoring "noauto" for root device
	[  +5.628221] systemd-fstab-generator[11510]: Ignoring "noauto" for root device
	[  +0.447013] systemd-fstab-generator[11646]: Ignoring "noauto" for root device
	
	
	==> etcd [6ee3dcc8f373] <==
	{"level":"info","ts":"2024-07-19T14:37:20.091Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-19T14:37:20.091Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-19T14:37:20.100Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T14:37:20.100Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T14:37:20.100Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T14:37:20.100Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-19T14:37:20.100Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-059000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T14:37:20.667Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T14:37:20.668Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
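The etcd log shows a healthy single-member bootstrap: member f074a195de705325 pre-votes, votes for itself, and becomes leader at term 2 before serving clients on 2379. A sketch for confirming member and leader state from inside the node, assuming etcdctl is available and reusing the cert paths from the TLS line above (using the server cert for client auth is an assumption, though common on minikube nodes):

	ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status --write-out=table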
	
	==> kernel <==
	 14:41:41 up 9 min,  0 users,  load average: 0.32, 0.42, 0.25
	Linux running-upgrade-059000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [7352fbd733e7] <==
	I0719 14:37:21.874935       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 14:37:21.874947       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 14:37:21.874956       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0719 14:37:21.876426       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0719 14:37:21.877139       1 cache.go:39] Caches are synced for autoregister controller
	I0719 14:37:21.896734       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0719 14:37:21.899320       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0719 14:37:22.632049       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 14:37:22.778980       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 14:37:22.780675       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 14:37:22.780773       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 14:37:22.899867       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 14:37:22.909873       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 14:37:22.970672       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0719 14:37:22.972895       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0719 14:37:22.973277       1 controller.go:611] quota admission added evaluator for: endpoints
	I0719 14:37:22.974623       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 14:37:23.906339       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0719 14:37:24.560458       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0719 14:37:24.563802       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0719 14:37:24.592316       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0719 14:37:24.611314       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 14:37:37.464648       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0719 14:37:37.517710       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0719 14:37:38.548450       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [add7facb14bf] <==
	I0719 14:37:37.525216       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0719 14:37:37.525220       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0719 14:37:37.526900       1 shared_informer.go:262] Caches are synced for node
	I0719 14:37:37.526915       1 range_allocator.go:173] Starting range CIDR allocator
	I0719 14:37:37.526917       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0719 14:37:37.526920       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0719 14:37:37.527115       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-ptqvd"
	I0719 14:37:37.530941       1 shared_informer.go:262] Caches are synced for PV protection
	I0719 14:37:37.532100       1 range_allocator.go:374] Set node running-upgrade-059000 PodCIDR to [10.244.0.0/24]
	I0719 14:37:37.532313       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7rpzz"
	I0719 14:37:37.537912       1 shared_informer.go:262] Caches are synced for TTL
	I0719 14:37:37.543110       1 shared_informer.go:262] Caches are synced for namespace
	I0719 14:37:37.656708       1 shared_informer.go:262] Caches are synced for taint
	I0719 14:37:37.656786       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0719 14:37:37.656929       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0719 14:37:37.656952       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-059000. Assuming now as a timestamp.
	I0719 14:37:37.657083       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0719 14:37:37.657189       1 event.go:294] "Event occurred" object="running-upgrade-059000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-059000 event: Registered Node running-upgrade-059000 in Controller"
	I0719 14:37:37.733695       1 shared_informer.go:262] Caches are synced for crt configmap
	I0719 14:37:37.734842       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 14:37:37.744062       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 14:37:37.757672       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0719 14:37:38.162435       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 14:37:38.213609       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 14:37:38.213708       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [e6e962b4d57e] <==
	I0719 14:37:38.537987       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0719 14:37:38.538009       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0719 14:37:38.538019       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0719 14:37:38.546663       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0719 14:37:38.546670       1 server_others.go:206] "Using iptables Proxier"
	I0719 14:37:38.546682       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0719 14:37:38.546779       1 server.go:661] "Version info" version="v1.24.1"
	I0719 14:37:38.546783       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 14:37:38.547007       1 config.go:317] "Starting service config controller"
	I0719 14:37:38.547013       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0719 14:37:38.547020       1 config.go:226] "Starting endpoint slice config controller"
	I0719 14:37:38.547022       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0719 14:37:38.547671       1 config.go:444] "Starting node config controller"
	I0719 14:37:38.547700       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0719 14:37:38.647348       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0719 14:37:38.647364       1 shared_informer.go:262] Caches are synced for service config
	I0719 14:37:38.647785       1 shared_informer.go:262] Caches are synced for node config
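kube-proxy found no explicit proxy mode in its config (proxyMode="") and fell back to the iptables proxier, then synced its service, endpoint-slice, and node config controllers. With the iptables proxier, each Service clusterIP is programmed into the KUBE-SERVICES chain of the nat table; a sketch to confirm from inside the node:

	# Expect one dispatch rule per Service clusterIP (kubernetes, kube-dns, ...).
	sudo iptables -t nat -L KUBE-SERVICES -n | head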
	
	
	==> kube-scheduler [d10bbe034fb3] <==
	W0719 14:37:21.858388       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 14:37:21.858395       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 14:37:21.858455       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 14:37:21.858464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 14:37:21.858538       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 14:37:21.858545       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 14:37:21.858623       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 14:37:21.858633       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 14:37:21.858645       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 14:37:21.858648       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 14:37:21.858691       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 14:37:21.858701       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 14:37:21.858736       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:37:21.858740       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 14:37:21.858789       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 14:37:21.858797       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 14:37:21.858839       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 14:37:21.858920       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 14:37:21.858994       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 14:37:21.859008       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 14:37:22.723550       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 14:37:22.723637       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 14:37:22.731550       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 14:37:22.731566       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0719 14:37:23.352757       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
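The burst of "forbidden" list/watch errors above is the usual startup race: the scheduler's informers come up before the apiserver has finished bootstrapping RBAC, and the errors stop once caches sync (last line). A sketch to verify the scheduler's access after bootstrap, run against the cluster:

	# Expect "yes" for each resource the informers failed to list at startup.
	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler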
	
	
	==> kubelet <==
	-- Journal begins at Fri 2024-07-19 14:32:35 UTC, ends at Fri 2024-07-19 14:41:42 UTC. --
	Jul 19 14:37:26 running-upgrade-059000 kubelet[11516]: E0719 14:37:26.587252   11516 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-059000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-059000"
	Jul 19 14:37:26 running-upgrade-059000 kubelet[11516]: I0719 14:37:26.785406   11516 request.go:601] Waited for 1.105007938s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 19 14:37:26 running-upgrade-059000 kubelet[11516]: E0719 14:37:26.788414   11516 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-059000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-059000"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.469587   11516 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.620824   11516 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.620898   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e7d6cfd-620f-43ae-a421-de7c0c174645-xtables-lock\") pod \"kube-proxy-hkj79\" (UID: \"4e7d6cfd-620f-43ae-a421-de7c0c174645\") " pod="kube-system/kube-proxy-hkj79"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.620913   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trpbs\" (UniqueName: \"kubernetes.io/projected/4e7d6cfd-620f-43ae-a421-de7c0c174645-kube-api-access-trpbs\") pod \"kube-proxy-hkj79\" (UID: \"4e7d6cfd-620f-43ae-a421-de7c0c174645\") " pod="kube-system/kube-proxy-hkj79"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.620928   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e7d6cfd-620f-43ae-a421-de7c0c174645-kube-proxy\") pod \"kube-proxy-hkj79\" (UID: \"4e7d6cfd-620f-43ae-a421-de7c0c174645\") " pod="kube-system/kube-proxy-hkj79"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.621082   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e7d6cfd-620f-43ae-a421-de7c0c174645-lib-modules\") pod \"kube-proxy-hkj79\" (UID: \"4e7d6cfd-620f-43ae-a421-de7c0c174645\") " pod="kube-system/kube-proxy-hkj79"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.621184   11516 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.665312   11516 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: E0719 14:37:37.725233   11516 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: E0719 14:37:37.725248   11516 projected.go:192] Error preparing data for projected volume kube-api-access-trpbs for pod kube-system/kube-proxy-hkj79: configmap "kube-root-ca.crt" not found
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: E0719 14:37:37.725296   11516 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/4e7d6cfd-620f-43ae-a421-de7c0c174645-kube-api-access-trpbs podName:4e7d6cfd-620f-43ae-a421-de7c0c174645 nodeName:}" failed. No retries permitted until 2024-07-19 14:37:38.225282032 +0000 UTC m=+13.674823909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-trpbs" (UniqueName: "kubernetes.io/projected/4e7d6cfd-620f-43ae-a421-de7c0c174645-kube-api-access-trpbs") pod "kube-proxy-hkj79" (UID: "4e7d6cfd-620f-43ae-a421-de7c0c174645") : configmap "kube-root-ca.crt" not found
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.823685   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s52kc\" (UniqueName: \"kubernetes.io/projected/61f6bb4d-a495-43b6-a6ff-3256cc05da98-kube-api-access-s52kc\") pod \"storage-provisioner\" (UID: \"61f6bb4d-a495-43b6-a6ff-3256cc05da98\") " pod="kube-system/storage-provisioner"
	Jul 19 14:37:37 running-upgrade-059000 kubelet[11516]: I0719 14:37:37.823719   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/61f6bb4d-a495-43b6-a6ff-3256cc05da98-tmp\") pod \"storage-provisioner\" (UID: \"61f6bb4d-a495-43b6-a6ff-3256cc05da98\") " pod="kube-system/storage-provisioner"
	Jul 19 14:37:38 running-upgrade-059000 kubelet[11516]: I0719 14:37:38.768383   11516 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 14:37:38 running-upgrade-059000 kubelet[11516]: I0719 14:37:38.770769   11516 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 14:37:38 running-upgrade-059000 kubelet[11516]: I0719 14:37:38.939081   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/291cca7c-c5d1-47e1-a6d5-6618e3e52528-config-volume\") pod \"coredns-6d4b75cb6d-7rpzz\" (UID: \"291cca7c-c5d1-47e1-a6d5-6618e3e52528\") " pod="kube-system/coredns-6d4b75cb6d-7rpzz"
	Jul 19 14:37:38 running-upgrade-059000 kubelet[11516]: I0719 14:37:38.939208   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c564a52-471e-4e6b-bf40-26af8603f192-config-volume\") pod \"coredns-6d4b75cb6d-ptqvd\" (UID: \"5c564a52-471e-4e6b-bf40-26af8603f192\") " pod="kube-system/coredns-6d4b75cb6d-ptqvd"
	Jul 19 14:37:38 running-upgrade-059000 kubelet[11516]: I0719 14:37:38.939254   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmdxp\" (UniqueName: \"kubernetes.io/projected/291cca7c-c5d1-47e1-a6d5-6618e3e52528-kube-api-access-bmdxp\") pod \"coredns-6d4b75cb6d-7rpzz\" (UID: \"291cca7c-c5d1-47e1-a6d5-6618e3e52528\") " pod="kube-system/coredns-6d4b75cb6d-7rpzz"
	Jul 19 14:37:38 running-upgrade-059000 kubelet[11516]: I0719 14:37:38.939272   11516 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rjw9\" (UniqueName: \"kubernetes.io/projected/5c564a52-471e-4e6b-bf40-26af8603f192-kube-api-access-7rjw9\") pod \"coredns-6d4b75cb6d-ptqvd\" (UID: \"5c564a52-471e-4e6b-bf40-26af8603f192\") " pod="kube-system/coredns-6d4b75cb6d-ptqvd"
	Jul 19 14:37:39 running-upgrade-059000 kubelet[11516]: I0719 14:37:39.833597   11516 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="98b7da6116307cfa43ec9f6bfb603c83bae5eddb7320dad22bfad9c880ab2623"
	Jul 19 14:41:27 running-upgrade-059000 kubelet[11516]: I0719 14:41:27.186222   11516 scope.go:110] "RemoveContainer" containerID="8cc60ed45693165823ccd2f528dca0199db4121af1857cf091c63b0a42adba3c"
	Jul 19 14:41:27 running-upgrade-059000 kubelet[11516]: I0719 14:41:27.198327   11516 scope.go:110] "RemoveContainer" containerID="ed5753ac5bdd6f99e217b8f9a45fc314102e09a7a37bc5d057fdf65a24129402"
	
	
	==> storage-provisioner [d7cc7f846035] <==
	I0719 14:37:38.180053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 14:37:38.186677       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 14:37:38.186700       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 14:37:38.190012       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 14:37:38.190261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-059000_b51bc39b-2c33-42bf-bc44-4a686e5bd0c8!
	I0719 14:37:38.190305       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ab1e6596-89be-4899-9ac3-17f740ecb055", APIVersion:"v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-059000_b51bc39b-2c33-42bf-bc44-4a686e5bd0c8 became leader
	I0719 14:37:38.290412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-059000_b51bc39b-2c33-42bf-bc44-4a686e5bd0c8!
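The storage-provisioner takes its leader lease on the kube-system/k8s.io-minikube-hostpath Endpoints object (the Event above references it). A sketch to inspect the lease annotation, whose holder identity should match the instance name in the log:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml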
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-059000 -n running-upgrade-059000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-059000 -n running-upgrade-059000: exit status 2 (15.712895958s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-059000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-059000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-059000
--- FAIL: TestRunningBinaryUpgrade (585.94s)

TestKubernetesUpgrade (16.93s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-997000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-997000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.708008792s)

-- stdout --
	* [kubernetes-upgrade-997000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-997000" primary control-plane node in "kubernetes-upgrade-997000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-997000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
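Both VM create attempts die on the same host-side error: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet, i.e. no socket_vmnet daemon is listening. A sketch of host-side checks, assuming the install paths from the cluster config in the stderr log below:

	# On the macOS host: does the socket exist, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# The client binary minikube invokes:
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client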
** stderr ** 
	I0719 07:35:12.174334    8501 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:35:12.174467    8501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:35:12.174471    8501 out.go:304] Setting ErrFile to fd 2...
	I0719 07:35:12.174473    8501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:35:12.174640    8501 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:35:12.175866    8501 out.go:298] Setting JSON to false
	I0719 07:35:12.192552    8501 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5681,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:35:12.192628    8501 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:35:12.198084    8501 out.go:177] * [kubernetes-upgrade-997000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:35:12.205102    8501 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:35:12.205140    8501 notify.go:220] Checking for updates...
	I0719 07:35:12.212065    8501 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:35:12.215069    8501 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:35:12.218040    8501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:35:12.220978    8501 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:35:12.224035    8501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:35:12.227327    8501 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:35:12.227392    8501 config.go:182] Loaded profile config "running-upgrade-059000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:35:12.227432    8501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:35:12.229978    8501 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:35:12.237045    8501 start.go:297] selected driver: qemu2
	I0719 07:35:12.237052    8501 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:35:12.237059    8501 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:35:12.239458    8501 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:35:12.252546    8501 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:35:12.256136    8501 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:35:12.256168    8501 cni.go:84] Creating CNI manager for ""
	I0719 07:35:12.256177    8501 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 07:35:12.256206    8501 start.go:340] cluster config:
	{Name:kubernetes-upgrade-997000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-997000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:35:12.260061    8501 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:35:12.267001    8501 out.go:177] * Starting "kubernetes-upgrade-997000" primary control-plane node in "kubernetes-upgrade-997000" cluster
	I0719 07:35:12.271017    8501 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:35:12.271040    8501 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 07:35:12.271055    8501 cache.go:56] Caching tarball of preloaded images
	I0719 07:35:12.271122    8501 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:35:12.271135    8501 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 07:35:12.271202    8501 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/kubernetes-upgrade-997000/config.json ...
	I0719 07:35:12.271214    8501 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/kubernetes-upgrade-997000/config.json: {Name:mk02433a02410550a5f51384f09e8c50130f53c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:35:12.271534    8501 start.go:360] acquireMachinesLock for kubernetes-upgrade-997000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:35:12.271569    8501 start.go:364] duration metric: took 28.167µs to acquireMachinesLock for "kubernetes-upgrade-997000"
	I0719 07:35:12.271580    8501 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-997000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-997000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:35:12.271627    8501 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:35:12.276037    8501 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:35:12.293313    8501 start.go:159] libmachine.API.Create for "kubernetes-upgrade-997000" (driver="qemu2")
	I0719 07:35:12.293341    8501 client.go:168] LocalClient.Create starting
	I0719 07:35:12.293404    8501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:35:12.293434    8501 main.go:141] libmachine: Decoding PEM data...
	I0719 07:35:12.293452    8501 main.go:141] libmachine: Parsing certificate...
	I0719 07:35:12.293488    8501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:35:12.293511    8501 main.go:141] libmachine: Decoding PEM data...
	I0719 07:35:12.293516    8501 main.go:141] libmachine: Parsing certificate...
	I0719 07:35:12.293909    8501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:35:12.410801    8501 main.go:141] libmachine: Creating SSH key...
	I0719 07:35:12.492966    8501 main.go:141] libmachine: Creating Disk image...
	I0719 07:35:12.492974    8501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:35:12.493192    8501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:12.502803    8501 main.go:141] libmachine: STDOUT: 
	I0719 07:35:12.502826    8501 main.go:141] libmachine: STDERR: 
	I0719 07:35:12.502873    8501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2 +20000M
	I0719 07:35:12.511048    8501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:35:12.511066    8501 main.go:141] libmachine: STDERR: 
	I0719 07:35:12.511085    8501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:12.511089    8501 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:35:12.511102    8501 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:35:12.511135    8501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:49:67:a2:44:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:12.512852    8501 main.go:141] libmachine: STDOUT: 
	I0719 07:35:12.512870    8501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:35:12.512893    8501 client.go:171] duration metric: took 219.548583ms to LocalClient.Create
	I0719 07:35:14.515100    8501 start.go:128] duration metric: took 2.243454833s to createHost
	I0719 07:35:14.515219    8501 start.go:83] releasing machines lock for "kubernetes-upgrade-997000", held for 2.243642083s
	W0719 07:35:14.515309    8501 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:35:14.522572    8501 out.go:177] * Deleting "kubernetes-upgrade-997000" in qemu2 ...
	W0719 07:35:14.543285    8501 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:35:14.543317    8501 start.go:729] Will try again in 5 seconds ...
	I0719 07:35:19.544762    8501 start.go:360] acquireMachinesLock for kubernetes-upgrade-997000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:35:19.545304    8501 start.go:364] duration metric: took 435µs to acquireMachinesLock for "kubernetes-upgrade-997000"
	I0719 07:35:19.545387    8501 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-997000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-997000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:35:19.545581    8501 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:35:19.554248    8501 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:35:19.602406    8501 start.go:159] libmachine.API.Create for "kubernetes-upgrade-997000" (driver="qemu2")
	I0719 07:35:19.602460    8501 client.go:168] LocalClient.Create starting
	I0719 07:35:19.602578    8501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:35:19.602657    8501 main.go:141] libmachine: Decoding PEM data...
	I0719 07:35:19.602676    8501 main.go:141] libmachine: Parsing certificate...
	I0719 07:35:19.602734    8501 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:35:19.602779    8501 main.go:141] libmachine: Decoding PEM data...
	I0719 07:35:19.602807    8501 main.go:141] libmachine: Parsing certificate...
	I0719 07:35:19.603378    8501 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:35:19.733734    8501 main.go:141] libmachine: Creating SSH key...
	I0719 07:35:19.791631    8501 main.go:141] libmachine: Creating Disk image...
	I0719 07:35:19.791637    8501 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:35:19.791831    8501 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:19.801351    8501 main.go:141] libmachine: STDOUT: 
	I0719 07:35:19.801373    8501 main.go:141] libmachine: STDERR: 
	I0719 07:35:19.801426    8501 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2 +20000M
	I0719 07:35:19.809327    8501 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:35:19.809341    8501 main.go:141] libmachine: STDERR: 
	I0719 07:35:19.809353    8501 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:19.809357    8501 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:35:19.809365    8501 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:35:19.809390    8501 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:07:0b:14:e5:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:19.811047    8501 main.go:141] libmachine: STDOUT: 
	I0719 07:35:19.811060    8501 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:35:19.811077    8501 client.go:171] duration metric: took 208.611375ms to LocalClient.Create
	I0719 07:35:21.813270    8501 start.go:128] duration metric: took 2.267665125s to createHost
	I0719 07:35:21.813375    8501 start.go:83] releasing machines lock for "kubernetes-upgrade-997000", held for 2.268061208s
	W0719 07:35:21.813775    8501 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-997000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-997000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:35:21.823339    8501 out.go:177] 
	W0719 07:35:21.828534    8501 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:35:21.828569    8501 out.go:239] * 
	* 
	W0719 07:35:21.831043    8501 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:35:21.840495    8501 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-997000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-997000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-997000: (1.804118792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-997000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-997000 status --format={{.Host}}: exit status 7 (57.657791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-997000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-997000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.18696475s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-997000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-997000" primary control-plane node in "kubernetes-upgrade-997000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-997000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-997000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:35:23.748601    8528 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:35:23.748745    8528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:35:23.748749    8528 out.go:304] Setting ErrFile to fd 2...
	I0719 07:35:23.748751    8528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:35:23.748885    8528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:35:23.749869    8528 out.go:298] Setting JSON to false
	I0719 07:35:23.766263    8528 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5692,"bootTime":1721394031,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:35:23.766334    8528 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:35:23.770074    8528 out.go:177] * [kubernetes-upgrade-997000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:35:23.778177    8528 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:35:23.778243    8528 notify.go:220] Checking for updates...
	I0719 07:35:23.785107    8528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:35:23.788107    8528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:35:23.791099    8528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:35:23.794072    8528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:35:23.797150    8528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:35:23.800319    8528 config.go:182] Loaded profile config "kubernetes-upgrade-997000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 07:35:23.800598    8528 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:35:23.805043    8528 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:35:23.811989    8528 start.go:297] selected driver: qemu2
	I0719 07:35:23.811996    8528 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-997000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-997000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:35:23.812049    8528 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:35:23.814281    8528 cni.go:84] Creating CNI manager for ""
	I0719 07:35:23.814298    8528 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:35:23.814314    8528 start.go:340] cluster config:
	{Name:kubernetes-upgrade-997000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-997000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:35:23.817620    8528 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:35:23.824904    8528 out.go:177] * Starting "kubernetes-upgrade-997000" primary control-plane node in "kubernetes-upgrade-997000" cluster
	I0719 07:35:23.829070    8528 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:35:23.829086    8528 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 07:35:23.829097    8528 cache.go:56] Caching tarball of preloaded images
	I0719 07:35:23.829158    8528 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:35:23.829164    8528 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 07:35:23.829219    8528 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/kubernetes-upgrade-997000/config.json ...
	I0719 07:35:23.829622    8528 start.go:360] acquireMachinesLock for kubernetes-upgrade-997000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:35:23.829647    8528 start.go:364] duration metric: took 20.166µs to acquireMachinesLock for "kubernetes-upgrade-997000"
	I0719 07:35:23.829655    8528 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:35:23.829660    8528 fix.go:54] fixHost starting: 
	I0719 07:35:23.829775    8528 fix.go:112] recreateIfNeeded on kubernetes-upgrade-997000: state=Stopped err=<nil>
	W0719 07:35:23.829785    8528 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:35:23.833874    8528 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-997000" ...
	I0719 07:35:23.842060    8528 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:35:23.842096    8528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:07:0b:14:e5:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:23.844090    8528 main.go:141] libmachine: STDOUT: 
	I0719 07:35:23.844113    8528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:35:23.844140    8528 fix.go:56] duration metric: took 14.4795ms for fixHost
	I0719 07:35:23.844144    8528 start.go:83] releasing machines lock for "kubernetes-upgrade-997000", held for 14.492708ms
	W0719 07:35:23.844150    8528 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:35:23.844182    8528 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:35:23.844186    8528 start.go:729] Will try again in 5 seconds ...
	I0719 07:35:28.846161    8528 start.go:360] acquireMachinesLock for kubernetes-upgrade-997000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:35:28.846683    8528 start.go:364] duration metric: took 386.208µs to acquireMachinesLock for "kubernetes-upgrade-997000"
	I0719 07:35:28.846776    8528 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:35:28.846794    8528 fix.go:54] fixHost starting: 
	I0719 07:35:28.847529    8528 fix.go:112] recreateIfNeeded on kubernetes-upgrade-997000: state=Stopped err=<nil>
	W0719 07:35:28.847556    8528 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:35:28.857548    8528 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-997000" ...
	I0719 07:35:28.860439    8528 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:35:28.860689    8528 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:07:0b:14:e5:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubernetes-upgrade-997000/disk.qcow2
	I0719 07:35:28.870389    8528 main.go:141] libmachine: STDOUT: 
	I0719 07:35:28.870450    8528 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:35:28.870547    8528 fix.go:56] duration metric: took 23.753916ms for fixHost
	I0719 07:35:28.870564    8528 start.go:83] releasing machines lock for "kubernetes-upgrade-997000", held for 23.857833ms
	W0719 07:35:28.870825    8528 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-997000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-997000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:35:28.878421    8528 out.go:177] 
	W0719 07:35:28.881495    8528 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:35:28.881521    8528 out.go:239] * 
	* 
	W0719 07:35:28.884115    8528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:35:28.892464    8528 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-997000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-997000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-997000 version --output=json: exit status 1 (61.03075ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-997000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-19 07:35:28.968619 -0700 PDT m=+958.245892959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-997000 -n kubernetes-upgrade-997000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-997000 -n kubernetes-upgrade-997000: exit status 7 (32.57125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-997000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-997000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-997000
--- FAIL: TestKubernetesUpgrade (16.93s)
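Note: every qemu2 create/restart in this test fails at the same host-side step: socket_vmnet_client cannot reach the socket_vmnet daemon (Failed to connect to "/var/run/socket_vmnet": Connection refused), so no VM ever boots and minikube exits with status 80. This points at the CI host's networking helper rather than at the minikube binary under test. A minimal host-side check is sketched below; the Homebrew install method and service name are assumptions, not taken from this log:

	# Assumed diagnostic (not part of the test run): is the daemon serving its unix socket?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet was installed via Homebrew (assumption), restart the service:
	sudo brew services restart socket_vmnet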

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1487192201/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1481306443/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)
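Note: both TestHyperkitDriverSkipUpgrade subtests exit with DRV_UNSUPPORTED_OS because hyperkit is an Intel-only hypervisor and this agent runs darwin/arm64; no minikube change can make them pass here. On Apple Silicon the qemu2 driver is the applicable substitute, as the rest of this report exercises:

	# Sketch of the arm64-appropriate invocation (mirrors the other tests in this report):
	minikube start --driver=qemu2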

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (577.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1996949131 start -p stopped-upgrade-109000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1996949131 start -p stopped-upgrade-109000 --memory=2200 --vm-driver=qemu2 : (41.104304583s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1996949131 -p stopped-upgrade-109000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1996949131 -p stopped-upgrade-109000 stop: (12.112917875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-109000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-109000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m44.473311s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-109000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-109000" primary control-plane node in "stopped-upgrade-109000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-109000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:36:23.303515    8572 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:36:23.303679    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:36:23.303683    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:36:23.303686    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:36:23.303842    8572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:36:23.305130    8572 out.go:298] Setting JSON to false
	I0719 07:36:23.324734    8572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5752,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:36:23.324809    8572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:36:23.330111    8572 out.go:177] * [stopped-upgrade-109000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:36:23.337058    8572 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:36:23.337124    8572 notify.go:220] Checking for updates...
	I0719 07:36:23.344177    8572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:36:23.347099    8572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:36:23.351094    8572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:36:23.354077    8572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:36:23.357063    8572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:36:23.360270    8572 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:36:23.364031    8572 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 07:36:23.366964    8572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:36:23.371009    8572 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:36:23.381081    8572 start.go:297] selected driver: qemu2
	I0719 07:36:23.381089    8572 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:36:23.381155    8572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:36:23.384087    8572 cni.go:84] Creating CNI manager for ""
	I0719 07:36:23.384107    8572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:36:23.384133    8572 start.go:340] cluster config:
	{Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:36:23.384192    8572 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:36:23.392108    8572 out.go:177] * Starting "stopped-upgrade-109000" primary control-plane node in "stopped-upgrade-109000" cluster
	I0719 07:36:23.396037    8572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 07:36:23.396054    8572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0719 07:36:23.396067    8572 cache.go:56] Caching tarball of preloaded images
	I0719 07:36:23.396143    8572 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:36:23.396149    8572 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0719 07:36:23.396213    8572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/config.json ...
	I0719 07:36:23.396712    8572 start.go:360] acquireMachinesLock for stopped-upgrade-109000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:36:23.396752    8572 start.go:364] duration metric: took 33.125µs to acquireMachinesLock for "stopped-upgrade-109000"
	I0719 07:36:23.396761    8572 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:36:23.396767    8572 fix.go:54] fixHost starting: 
	I0719 07:36:23.396888    8572 fix.go:112] recreateIfNeeded on stopped-upgrade-109000: state=Stopped err=<nil>
	W0719 07:36:23.396897    8572 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:36:23.405043    8572 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-109000" ...
	I0719 07:36:23.409009    8572 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:36:23.409079    8572 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51371-:22,hostfwd=tcp::51372-:2376,hostname=stopped-upgrade-109000 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/disk.qcow2
	I0719 07:36:23.459830    8572 main.go:141] libmachine: STDOUT: 
	I0719 07:36:23.459860    8572 main.go:141] libmachine: STDERR: 
	I0719 07:36:23.459865    8572 main.go:141] libmachine: Waiting for VM to start (ssh -p 51371 docker@127.0.0.1)...
	I0719 07:36:43.343788    8572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/config.json ...
	I0719 07:36:43.344492    8572 machine.go:94] provisionDockerMachine start ...
	I0719 07:36:43.344679    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.345180    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.345194    8572 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 07:36:43.440924    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 07:36:43.440968    8572 buildroot.go:166] provisioning hostname "stopped-upgrade-109000"
	I0719 07:36:43.441080    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.441381    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.441394    8572 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-109000 && echo "stopped-upgrade-109000" | sudo tee /etc/hostname
	I0719 07:36:43.530844    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-109000
	
	I0719 07:36:43.530948    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.531170    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.531186    8572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-109000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-109000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-109000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 07:36:43.606133    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 07:36:43.606146    8572 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-5980/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-5980/.minikube}
	I0719 07:36:43.606154    8572 buildroot.go:174] setting up certificates
	I0719 07:36:43.606158    8572 provision.go:84] configureAuth start
	I0719 07:36:43.606162    8572 provision.go:143] copyHostCerts
	I0719 07:36:43.606244    8572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem, removing ...
	I0719 07:36:43.606253    8572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem
	I0719 07:36:43.606489    8572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.pem (1078 bytes)
	I0719 07:36:43.606667    8572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem, removing ...
	I0719 07:36:43.606671    8572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem
	I0719 07:36:43.606724    8572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/cert.pem (1123 bytes)
	I0719 07:36:43.606821    8572 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem, removing ...
	I0719 07:36:43.606826    8572 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem
	I0719 07:36:43.606872    8572 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-5980/.minikube/key.pem (1679 bytes)
	I0719 07:36:43.606953    8572 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-109000 san=[127.0.0.1 localhost minikube stopped-upgrade-109000]
	I0719 07:36:43.768686    8572 provision.go:177] copyRemoteCerts
	I0719 07:36:43.768728    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 07:36:43.768737    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:36:43.806159    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0719 07:36:43.813135    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 07:36:43.819927    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 07:36:43.826396    8572 provision.go:87] duration metric: took 220.330334ms to configureAuth
	I0719 07:36:43.826409    8572 buildroot.go:189] setting minikube options for container-runtime
	I0719 07:36:43.826525    8572 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:36:43.826565    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.826652    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.826659    8572 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 07:36:43.896346    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 07:36:43.896355    8572 buildroot.go:70] root file system type: tmpfs
	I0719 07:36:43.896406    8572 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 07:36:43.896462    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.896577    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.896613    8572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 07:36:43.972903    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 07:36:43.972964    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:43.973135    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:43.973144    8572 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 07:36:44.324481    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 07:36:44.324496    8572 machine.go:97] duration metric: took 980.451541ms to provisionDockerMachine
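The unit update just above follows a check-then-swap idiom: the candidate unit is rendered to docker.service.new, and the move plus daemon-reload/enable/restart only runs when diff exits nonzero (here because the installed unit did not exist yet). A sketch of the same idiom driven from Go via os/exec, assuming root and systemd (paths as in the log):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	// diff exits 0 when the files match, nonzero when they differ or
	// when the installed unit is missing -- both mean "swap it in".
	if run("diff", "-u",
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new") == nil {
		fmt.Println("unit unchanged; nothing to do")
		return
	}
	steps := [][]string{
		{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			panic(err)
		}
	}
}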
	I0719 07:36:44.324503    8572 start.go:293] postStartSetup for "stopped-upgrade-109000" (driver="qemu2")
	I0719 07:36:44.324510    8572 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 07:36:44.324580    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 07:36:44.324593    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:36:44.361462    8572 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 07:36:44.362935    8572 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 07:36:44.362942    8572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-5980/.minikube/addons for local assets ...
	I0719 07:36:44.363043    8572 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-5980/.minikube/files for local assets ...
	I0719 07:36:44.363165    8572 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem -> 64732.pem in /etc/ssl/certs
	I0719 07:36:44.363294    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 07:36:44.366303    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem --> /etc/ssl/certs/64732.pem (1708 bytes)
	I0719 07:36:44.375037    8572 start.go:296] duration metric: took 50.549666ms for postStartSetup
	I0719 07:36:44.375058    8572 fix.go:56] duration metric: took 20.980955084s for fixHost
	I0719 07:36:44.375109    8572 main.go:141] libmachine: Using SSH client type: native
	I0719 07:36:44.375242    8572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c42a10] 0x100c45270 <nil>  [] 0s} localhost 51371 <nil> <nil>}
	I0719 07:36:44.375247    8572 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 07:36:44.444355    8572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721399804.194076629
	
	I0719 07:36:44.444365    8572 fix.go:216] guest clock: 1721399804.194076629
	I0719 07:36:44.444369    8572 fix.go:229] Guest: 2024-07-19 07:36:44.194076629 -0700 PDT Remote: 2024-07-19 07:36:44.375059 -0700 PDT m=+21.106254293 (delta=-180.982371ms)
	I0719 07:36:44.444382    8572 fix.go:200] guest clock delta is within tolerance: -180.982371ms
	I0719 07:36:44.444385    8572 start.go:83] releasing machines lock for "stopped-upgrade-109000", held for 21.050324041s
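The clock fix above reads the guest clock as fractional epoch seconds (date +%s.%N) and compares it against the host clock; a delta inside the tolerance window means no resync is needed. A sketch of that parse-and-compare, assuming %N yields nine zero-padded digits (the tolerance constant is illustrative, not minikube's):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1721399804.194076629") // value from the log
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now()) // host time stands in for the sample moment
	const tolerance = 2 * time.Second
	if delta < -tolerance || delta > tolerance {
		fmt.Println("guest clock outside tolerance:", delta)
		return
	}
	fmt.Println("guest clock delta within tolerance:", delta)
}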
	I0719 07:36:44.444456    8572 ssh_runner.go:195] Run: cat /version.json
	I0719 07:36:44.444466    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:36:44.444456    8572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 07:36:44.444502    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	W0719 07:36:44.445065    8572 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51371: connect: connection refused
	I0719 07:36:44.445086    8572 retry.go:31] will retry after 171.846708ms: dial tcp [::1]:51371: connect: connection refused
	W0719 07:36:44.657666    8572 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0719 07:36:44.657738    8572 ssh_runner.go:195] Run: systemctl --version
	I0719 07:36:44.659822    8572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 07:36:44.661685    8572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 07:36:44.661727    8572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 07:36:44.665307    8572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 07:36:44.670573    8572 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
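The find/sed pipeline above rewrites any IPv6 or foreign subnet/gateway fields in the CNI conflists to the 10.244.0.0/16 pod CIDR. The same substitution expressed with Go's regexp package, run here against a trimmed-down conflist for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conflist := `{"plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16","gateway":"10.88.0.1"}]]}}]}`
	subnet := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
	gateway := regexp.MustCompile(`"gateway":\s*"[^"]*"`)
	out := subnet.ReplaceAllString(conflist, `"subnet": "10.244.0.0/16"`)
	out = gateway.ReplaceAllString(out, `"gateway": "10.244.0.1"`)
	fmt.Println(out)
}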
	I0719 07:36:44.670581    8572 start.go:495] detecting cgroup driver to use...
	I0719 07:36:44.670661    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 07:36:44.677385    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0719 07:36:44.680800    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 07:36:44.683624    8572 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 07:36:44.683652    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 07:36:44.686376    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 07:36:44.689593    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 07:36:44.692914    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 07:36:44.695976    8572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 07:36:44.698840    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 07:36:44.701776    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 07:36:44.705080    8572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 07:36:44.708262    8572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 07:36:44.710785    8572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 07:36:44.713715    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:44.775543    8572 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 07:36:44.785441    8572 start.go:495] detecting cgroup driver to use...
	I0719 07:36:44.785507    8572 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 07:36:44.791791    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 07:36:44.796525    8572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 07:36:44.802303    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 07:36:44.807103    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 07:36:44.811845    8572 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 07:36:44.853383    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 07:36:44.858764    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 07:36:44.864127    8572 ssh_runner.go:195] Run: which cri-dockerd
	I0719 07:36:44.865346    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 07:36:44.868445    8572 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 07:36:44.873303    8572 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 07:36:44.939465    8572 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 07:36:44.998726    8572 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 07:36:44.998805    8572 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 07:36:45.003816    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:45.064327    8572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 07:36:46.189477    8572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.125602625s)
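The 130-byte daemon.json written above pins the docker cgroup driver to match the kubelet. The actual payload is not shown in the log; the following is a plausible shape rendered from Go, and the field set is an assumption rather than minikube's exact file:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	daemon := map[string]any{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	b, err := json.MarshalIndent(daemon, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}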
	I0719 07:36:46.189536    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 07:36:46.194537    8572 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0719 07:36:46.200679    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 07:36:46.205625    8572 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 07:36:46.269928    8572 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 07:36:46.329120    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:46.394640    8572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 07:36:46.400907    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 07:36:46.405931    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:46.477191    8572 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 07:36:46.514570    8572 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 07:36:46.514640    8572 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 07:36:46.516675    8572 start.go:563] Will wait 60s for crictl version
	I0719 07:36:46.516719    8572 ssh_runner.go:195] Run: which crictl
	I0719 07:36:46.518642    8572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 07:36:46.532614    8572 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0719 07:36:46.532685    8572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 07:36:46.549793    8572 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 07:36:46.570955    8572 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0719 07:36:46.571078    8572 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0719 07:36:46.572296    8572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
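The /etc/hosts rewrite above is idempotent: strip any existing host.minikube.internal line, then append the pinned mapping. The same filter-and-append in Go, operating on a string instead of the live file:

package main

import (
	"fmt"
	"strings"
)

// pinHost drops any line ending in "\t<name>" and appends "ip\tname",
// mirroring the `{ grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts` idiom.
func pinHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n"
	fmt.Print(pinHost(hosts, "10.0.2.2", "host.minikube.internal"))
}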
	I0719 07:36:46.576263    8572 kubeadm.go:883] updating cluster {Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0719 07:36:46.576309    8572 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0719 07:36:46.576351    8572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 07:36:46.594619    8572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 07:36:46.594628    8572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 07:36:46.594670    8572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 07:36:46.597703    8572 ssh_runner.go:195] Run: which lz4
	I0719 07:36:46.598954    8572 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 07:36:46.600132    8572 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 07:36:46.600145    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0719 07:36:47.527478    8572 docker.go:649] duration metric: took 928.905791ms to copy over tarball
	I0719 07:36:47.527543    8572 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 07:36:48.701711    8572 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.17457175s)
	I0719 07:36:48.701725    8572 ssh_runner.go:146] rm: /preloaded.tar.lz4
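The preload handling above is a stat-then-transfer-then-extract sequence: the existence check fails, the ~360 MB tarball is copied in, unpacked with lz4-compressed tar, and deleted. A sketch of the guest-side half, assuming GNU tar and lz4 are installed (as in the Buildroot image):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Existence check, as in ssh_runner.go:352 above.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing; would transfer it first:", err)
		return
	}
	// Unpack preserving xattrs such as security.capability.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove(tarball) // reclaim the space once extracted
}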
	I0719 07:36:48.717014    8572 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 07:36:48.720240    8572 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0719 07:36:48.725314    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:48.791136    8572 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 07:36:50.520994    8572 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.730403417s)
	I0719 07:36:50.521108    8572 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 07:36:50.545577    8572 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 07:36:50.545587    8572 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0719 07:36:50.545592    8572 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 07:36:50.551049    8572 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.553160    8572 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:50.554566    8572 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.554581    8572 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.555715    8572 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.555861    8572 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:50.557132    8572 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.558396    8572 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.558435    8572 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.558451    8572 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 07:36:50.559534    8572 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:50.559690    8572 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.561043    8572 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:50.561076    8572 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 07:36:50.562010    8572 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:50.563308    8572 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:50.881341    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.892327    8572 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0719 07:36:50.892356    8572 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.892405    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0719 07:36:50.902739    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0719 07:36:50.924510    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.931171    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.939544    8572 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0719 07:36:50.939567    8572 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.939620    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0719 07:36:50.945668    8572 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0719 07:36:50.945688    8572 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.945742    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0719 07:36:50.950330    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0719 07:36:50.955620    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0719 07:36:50.972405    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.982283    8572 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0719 07:36:50.982303    8572 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.982351    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0719 07:36:50.992198    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0719 07:36:50.992410    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0719 07:36:51.002274    8572 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0719 07:36:51.002291    8572 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0719 07:36:51.002349    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0719 07:36:51.012823    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 07:36:51.012942    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0719 07:36:51.014416    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0719 07:36:51.014428    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0719 07:36:51.022136    8572 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 07:36:51.022145    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0719 07:36:51.037906    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:51.058409    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0719 07:36:51.058427    8572 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0719 07:36:51.058444    8572 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 07:36:51.058495    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0719 07:36:51.060550    8572 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 07:36:51.060662    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:51.069314    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 07:36:51.069453    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 07:36:51.076375    8572 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0719 07:36:51.076384    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0719 07:36:51.076395    8572 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:51.076408    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0719 07:36:51.076437    8572 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 07:36:51.102665    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 07:36:51.102799    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0719 07:36:51.115095    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0719 07:36:51.115122    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0719 07:36:51.193262    8572 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 07:36:51.193355    8572 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:51.200977    8572 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 07:36:51.200990    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0719 07:36:51.250681    8572 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0719 07:36:51.250709    8572 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:51.250770    8572 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:36:51.296849    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 07:36:51.297965    8572 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 07:36:51.298089    8572 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0719 07:36:51.304265    8572 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0719 07:36:51.304294    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0719 07:36:51.382943    8572 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0719 07:36:51.382966    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0719 07:36:51.687657    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0719 07:36:51.687681    8572 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 07:36:51.687697    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0719 07:36:51.835547    8572 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 07:36:51.835588    8572 cache_images.go:92] duration metric: took 1.290365375s to LoadCachedImages
	W0719 07:36:51.835634    8572 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
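Each cache hit above is streamed into the daemon with `sudo cat <file> | docker load`. The same pipe from Go, wiring the tarball straight into docker load's stdin (sudo omitted here; the real runner needs it because the image files are root-owned):

package main

import (
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/pause_3.7")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // stands in for the `cat ... |` half of the pipeline
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}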
	I0719 07:36:51.835641    8572 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0719 07:36:51.835695    8572 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-109000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 07:36:51.835755    8572 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 07:36:51.849093    8572 cni.go:84] Creating CNI manager for ""
	I0719 07:36:51.849105    8572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:36:51.849112    8572 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 07:36:51.849122    8572 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-109000 NodeName:stopped-upgrade-109000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 07:36:51.849193    8572 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-109000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
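	The kubeadm config above is rendered from the option struct logged at kubeadm.go:181: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration stitched into one multi-document YAML. A sketch of rendering just the nodeRegistration stanza with text/template (a fragment for illustration, not minikube's full template):

package main

import (
	"os"
	"text/template"
)

const stanza = `nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("nodeRegistration").Parse(stanza))
	err := tmpl.Execute(os.Stdout, map[string]string{
		"CRISocket": "/var/run/cri-dockerd.sock",
		"NodeName":  "stopped-upgrade-109000",
		"NodeIP":    "10.0.2.15",
	})
	if err != nil {
		panic(err)
	}
}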
	
	I0719 07:36:51.849247    8572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0719 07:36:51.851990    8572 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 07:36:51.852018    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 07:36:51.855014    8572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0719 07:36:51.860132    8572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 07:36:51.864996    8572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0719 07:36:51.870280    8572 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0719 07:36:51.871629    8572 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 07:36:51.875288    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:36:51.935446    8572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 07:36:51.945644    8572 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000 for IP: 10.0.2.15
	I0719 07:36:51.945653    8572 certs.go:194] generating shared ca certs ...
	I0719 07:36:51.945665    8572 certs.go:226] acquiring lock for ca certs: {Name:mk9d0c6de3978c1656d7567742ecf2a49cbc189d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:51.945833    8572 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.key
	I0719 07:36:51.945886    8572 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.key
	I0719 07:36:51.945893    8572 certs.go:256] generating profile certs ...
	I0719 07:36:51.945965    8572 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.key
	I0719 07:36:51.945982    8572 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae
	I0719 07:36:51.945994    8572 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0719 07:36:52.018591    8572 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae ...
	I0719 07:36:52.018604    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae: {Name:mkaee78d5abd5d3da8d808e03ceb3cadfca2eaf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:52.019133    8572 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae ...
	I0719 07:36:52.019139    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae: {Name:mkd0cfab99ed4eb56f5637ac550bdd2dad781a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:52.019296    8572 certs.go:381] copying /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt.97e204ae -> /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt
	I0719 07:36:52.019436    8572 certs.go:385] copying /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key.97e204ae -> /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key
	I0719 07:36:52.019582    8572 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/proxy-client.key
	I0719 07:36:52.019716    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473.pem (1338 bytes)
	W0719 07:36:52.019750    8572 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473_empty.pem, impossibly tiny 0 bytes
	I0719 07:36:52.019758    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 07:36:52.019783    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem (1078 bytes)
	I0719 07:36:52.019805    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem (1123 bytes)
	I0719 07:36:52.019825    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/key.pem (1679 bytes)
	I0719 07:36:52.019863    8572 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem (1708 bytes)
	I0719 07:36:52.020257    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 07:36:52.027183    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 07:36:52.034372    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 07:36:52.041539    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 07:36:52.048055    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 07:36:52.055011    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 07:36:52.062209    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 07:36:52.069178    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 07:36:52.075658    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/ssl/certs/64732.pem --> /usr/share/ca-certificates/64732.pem (1708 bytes)
	I0719 07:36:52.082792    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 07:36:52.089827    8572 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/6473.pem --> /usr/share/ca-certificates/6473.pem (1338 bytes)
	I0719 07:36:52.096353    8572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 07:36:52.101180    8572 ssh_runner.go:195] Run: openssl version
	I0719 07:36:52.102932    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6473.pem && ln -fs /usr/share/ca-certificates/6473.pem /etc/ssl/certs/6473.pem"
	I0719 07:36:52.106120    8572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6473.pem
	I0719 07:36:52.107548    8572 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 14:20 /usr/share/ca-certificates/6473.pem
	I0719 07:36:52.107564    8572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6473.pem
	I0719 07:36:52.109391    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6473.pem /etc/ssl/certs/51391683.0"
	I0719 07:36:52.112182    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64732.pem && ln -fs /usr/share/ca-certificates/64732.pem /etc/ssl/certs/64732.pem"
	I0719 07:36:52.115334    8572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64732.pem
	I0719 07:36:52.116613    8572 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 14:20 /usr/share/ca-certificates/64732.pem
	I0719 07:36:52.116634    8572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64732.pem
	I0719 07:36:52.118233    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/64732.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 07:36:52.121369    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 07:36:52.124420    8572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:36:52.125765    8572 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 14:32 /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:36:52.125784    8572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 07:36:52.127687    8572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
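OpenSSL locates trust anchors by subject-hash symlinks, which is what the `openssl x509 -hash` plus `ln -fs` steps above create (e.g. b5213941.0 for minikubeCA.pem). A sketch that reproduces one such link, assuming openssl on PATH and write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, exactly as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: force-replace any stale link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}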
	I0719 07:36:52.130993    8572 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 07:36:52.132587    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 07:36:52.134815    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 07:36:52.136861    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 07:36:52.138716    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 07:36:52.140434    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 07:36:52.142048    8572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
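The `-checkend 86400` runs above fail when a certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regenerating. The equivalent check in Go's crypto/x509 (path illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl x509 -checkend 86400: nonzero exit if NotAfter is within 24h.
	if deadline := time.Now().Add(24 * time.Hour); cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h:", cert.NotAfter)
}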
	I0719 07:36:52.143732    8572 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51405 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0719 07:36:52.143795    8572 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 07:36:52.154040    8572 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 07:36:52.157255    8572 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 07:36:52.157261    8572 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 07:36:52.157283    8572 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 07:36:52.159967    8572 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 07:36:52.160262    8572 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-109000" does not appear in /Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:36:52.160358    8572 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-5980/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-109000" cluster setting kubeconfig missing "stopped-upgrade-109000" context setting]
	I0719 07:36:52.160574    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/kubeconfig: {Name:mk0c17b3830610cdae4c834f6bae9631cabc7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:36:52.161071    8572 kapi.go:59] client config for stopped-upgrade-109000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fd7790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
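The rest.Config dumped above is what kapi.go builds from the repaired kubeconfig. A sketch of loading the equivalent config with client-go's clientcmd, assuming the k8s.io/client-go module is available (the kubeconfig path is the one from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19302-5980/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", cfg.Host)
	fmt.Println("client cert:", cfg.TLSClientConfig.CertFile)
}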
	I0719 07:36:52.161423    8572 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 07:36:52.164041    8572 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-109000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0719 07:36:52.164047    8572 kubeadm.go:1160] stopping kube-system containers ...
	I0719 07:36:52.164086    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 07:36:52.174675    8572 docker.go:483] Stopping containers: [42ae714b96fa 9130f74d6072 f86743dde90f a6a24cbd561c 04c61becd2f7 c6e1d72d884c dcec56b7a639 b9e533e1c490]
	I0719 07:36:52.174734    8572 ssh_runner.go:195] Run: docker stop 42ae714b96fa 9130f74d6072 f86743dde90f a6a24cbd561c 04c61becd2f7 c6e1d72d884c dcec56b7a639 b9e533e1c490
	I0719 07:36:52.185098    8572 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 07:36:52.190597    8572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 07:36:52.193754    8572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 07:36:52.193760    8572 kubeadm.go:157] found existing configuration files:
	
	I0719 07:36:52.193790    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf
	I0719 07:36:52.196301    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 07:36:52.196328    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 07:36:52.199089    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf
	I0719 07:36:52.201978    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 07:36:52.202000    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 07:36:52.204596    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf
	I0719 07:36:52.207258    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 07:36:52.207276    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 07:36:52.210074    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf
	I0719 07:36:52.212621    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 07:36:52.212638    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 07:36:52.215236    8572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 07:36:52.218165    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.240619    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.655105    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.768454    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.789395    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 07:36:52.812684    8572 api_server.go:52] waiting for apiserver process to appear ...
	I0719 07:36:52.812758    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:36:53.314723    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:36:53.814644    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:36:53.819201    8572 api_server.go:72] duration metric: took 1.006777125s to wait for apiserver process to appear ...
	I0719 07:36:53.819210    8572 api_server.go:88] waiting for apiserver healthz status ...
	I0719 07:36:53.819220    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:36:58.820250    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:36:58.820300    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:03.820256    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:03.820306    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:08.820353    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:08.820398    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:13.820977    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:13.821023    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:18.821661    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:18.821687    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:23.822588    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:23.822604    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:28.823938    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:28.824012    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:33.824524    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:33.824598    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:38.826984    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:38.827026    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:43.829219    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:43.829276    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:48.831559    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:48.831586    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:37:53.831850    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:37:53.832164    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:37:53.858200    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:37:53.858343    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:37:53.874885    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:37:53.874973    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:37:53.888429    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:37:53.888493    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:37:53.899923    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:37:53.899996    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:37:53.910760    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:37:53.910847    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:37:53.921279    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:37:53.921346    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:37:53.931676    8572 logs.go:276] 0 containers: []
	W0719 07:37:53.931687    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:37:53.931745    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:37:53.942134    8572 logs.go:276] 0 containers: []
	W0719 07:37:53.942146    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:37:53.942155    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:37:53.942161    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:37:53.946294    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:37:53.946300    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:37:53.957715    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:37:53.957730    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:37:53.975391    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:37:53.975402    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:37:54.083189    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:37:54.083200    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:37:54.097252    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:37:54.097262    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:37:54.111491    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:37:54.111502    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:37:54.130128    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:37:54.130143    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:37:54.146135    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:37:54.146146    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:37:54.170356    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:37:54.170365    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:37:54.181681    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:37:54.181693    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:37:54.193042    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:37:54.193052    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:37:54.232520    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:37:54.232532    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:37:54.274130    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:37:54.274140    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:37:54.285428    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:37:54.285441    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:37:56.805233    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:01.806273    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:01.806490    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:01.824257    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:01.824358    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:01.838009    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:01.838075    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:01.849483    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:01.849556    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:01.860268    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:01.860331    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:01.870542    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:01.870616    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:01.883853    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:01.883922    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:01.894041    8572 logs.go:276] 0 containers: []
	W0719 07:38:01.894053    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:01.894105    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:01.904360    8572 logs.go:276] 0 containers: []
	W0719 07:38:01.904370    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:01.904378    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:01.904384    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:01.942451    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:01.942466    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:01.954331    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:01.954345    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:01.966218    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:01.966228    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:01.970467    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:01.970474    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:01.981978    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:01.981993    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:01.999445    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:01.999456    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:02.013306    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:02.013320    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:02.051211    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:02.051223    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:02.066747    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:02.066763    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:02.080341    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:02.080354    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:02.095036    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:02.095047    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:02.132189    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:02.132201    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:02.146031    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:02.146042    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:02.160656    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:02.160665    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:04.688271    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:09.690559    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:09.690898    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:09.719066    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:09.719180    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:09.737893    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:09.737977    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:09.751730    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:09.751811    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:09.763748    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:09.763817    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:09.774639    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:09.774706    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:09.785158    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:09.785223    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:09.795406    8572 logs.go:276] 0 containers: []
	W0719 07:38:09.795418    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:09.795477    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:09.805840    8572 logs.go:276] 0 containers: []
	W0719 07:38:09.805850    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:09.805858    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:09.805864    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:09.817935    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:09.817945    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:09.839841    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:09.839851    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:09.865617    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:09.865626    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:09.876832    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:09.876843    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:09.891138    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:09.891153    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:09.937577    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:09.937593    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:09.977283    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:09.977310    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:09.990372    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:09.990385    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:10.012403    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:10.012414    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:10.016565    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:10.016575    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:10.031294    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:10.031306    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:10.051215    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:10.051225    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:10.066210    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:10.066224    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:10.077672    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:10.077687    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:12.615586    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:17.617965    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:17.618387    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:17.656381    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:17.656514    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:17.679130    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:17.679223    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:17.695856    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:17.695925    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:17.707792    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:17.707859    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:17.719043    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:17.719105    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:17.729668    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:17.729736    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:17.740261    8572 logs.go:276] 0 containers: []
	W0719 07:38:17.740271    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:17.740325    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:17.750408    8572 logs.go:276] 0 containers: []
	W0719 07:38:17.750418    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:17.750427    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:17.750432    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:17.765394    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:17.765405    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:17.777498    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:17.777509    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:17.793071    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:17.793083    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:17.828620    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:17.828633    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:17.842907    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:17.842917    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:17.880431    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:17.880447    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:17.891911    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:17.891925    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:17.895916    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:17.895922    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:17.909476    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:17.909490    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:17.923597    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:17.923610    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:17.947000    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:17.947007    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:17.984399    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:17.984405    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:17.996297    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:17.996309    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:18.010184    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:18.010196    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:20.535248    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:25.537705    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:25.537939    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:25.554815    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:25.554893    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:25.568264    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:25.568334    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:25.580624    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:25.580679    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:25.590879    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:25.590949    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:25.601284    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:25.601343    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:25.611838    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:25.611901    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:25.622046    8572 logs.go:276] 0 containers: []
	W0719 07:38:25.622061    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:25.622116    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:25.632878    8572 logs.go:276] 0 containers: []
	W0719 07:38:25.632890    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:25.632898    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:25.632903    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:25.650934    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:25.650944    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:25.662319    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:25.662329    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:25.674178    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:25.674189    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:25.695632    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:25.695642    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:25.735003    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:25.735013    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:25.739217    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:25.739223    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:25.754385    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:25.754399    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:25.790993    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:25.791006    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:25.805295    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:25.805306    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:25.816879    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:25.816890    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:25.830561    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:25.830574    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:25.855554    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:25.855563    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:25.869218    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:25.869229    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:25.907290    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:25.907307    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:28.428193    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:33.430459    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:33.430690    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:33.451065    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:33.451155    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:33.471733    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:33.471801    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:33.487626    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:33.487683    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:33.498153    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:33.498225    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:33.508404    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:33.508471    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:33.523202    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:33.523268    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:33.534172    8572 logs.go:276] 0 containers: []
	W0719 07:38:33.534183    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:33.534241    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:33.544800    8572 logs.go:276] 0 containers: []
	W0719 07:38:33.544810    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:33.544819    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:33.544824    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:33.556240    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:33.556253    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:33.567802    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:33.567813    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:33.582281    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:33.582293    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:33.595787    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:33.595800    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:33.610533    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:33.610544    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:33.621582    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:33.621592    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:33.658446    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:33.658454    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:33.698416    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:33.698427    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:33.733763    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:33.733774    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:33.749130    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:33.749140    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:33.767723    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:33.767733    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:33.772536    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:33.772542    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:33.784296    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:33.784308    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:33.801803    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:33.801813    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:36.328371    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:41.330610    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:41.330689    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:41.341357    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:41.341421    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:41.353278    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:41.353340    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:41.363900    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:41.363969    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:41.374417    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:41.374485    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:41.385235    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:41.385297    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:41.395356    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:41.395415    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:41.405565    8572 logs.go:276] 0 containers: []
	W0719 07:38:41.405580    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:41.405644    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:41.416093    8572 logs.go:276] 0 containers: []
	W0719 07:38:41.416105    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:41.416112    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:41.416118    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:41.453767    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:41.453778    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:41.467500    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:41.467513    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:41.479352    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:41.479363    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:41.498357    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:41.498366    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:41.512254    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:41.512263    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:41.523564    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:41.523575    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:41.558385    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:41.558399    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:41.562967    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:41.562975    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:41.577584    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:41.577594    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:41.593606    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:41.593621    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:41.618239    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:41.618248    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:41.656942    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:41.656950    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:41.668297    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:41.668313    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:41.680249    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:41.680260    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:44.197242    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:49.199621    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:49.199783    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:49.214666    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:49.214744    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:49.227288    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:49.227365    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:49.238104    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:49.238167    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:49.248606    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:49.248668    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:49.258933    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:49.259007    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:49.271129    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:49.271201    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:49.281489    8572 logs.go:276] 0 containers: []
	W0719 07:38:49.281498    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:49.281555    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:49.291765    8572 logs.go:276] 0 containers: []
	W0719 07:38:49.291778    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:49.291787    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:49.291793    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:49.303255    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:49.303266    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:49.314632    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:49.314645    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:49.331906    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:49.331916    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:49.349267    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:49.349277    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:49.374582    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:49.374590    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:49.410808    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:49.410823    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:49.424613    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:49.424624    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:49.436287    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:49.436301    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:49.450953    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:49.450963    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:49.464695    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:49.464707    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:49.502101    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:49.502113    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:49.516661    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:49.516672    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:49.554681    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:49.554692    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:49.558916    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:49.558923    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:52.072116    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:38:57.074284    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:38:57.074439    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:38:57.087786    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:38:57.087864    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:38:57.102619    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:38:57.102684    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:38:57.112848    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:38:57.112916    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:38:57.129119    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:38:57.129191    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:38:57.139342    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:38:57.139407    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:38:57.149653    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:38:57.149721    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:38:57.159760    8572 logs.go:276] 0 containers: []
	W0719 07:38:57.159772    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:38:57.159830    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:38:57.169985    8572 logs.go:276] 0 containers: []
	W0719 07:38:57.169997    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:38:57.170006    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:38:57.170013    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:38:57.204596    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:38:57.204608    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:38:57.216517    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:38:57.216530    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:38:57.231833    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:38:57.231842    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:38:57.252941    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:38:57.252955    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:38:57.291928    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:38:57.291935    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:38:57.306227    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:38:57.306241    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:38:57.323385    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:38:57.323398    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:38:57.337810    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:38:57.337824    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:38:57.362318    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:38:57.362332    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:38:57.366330    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:38:57.366336    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:38:57.377907    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:38:57.377920    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:38:57.391602    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:38:57.391614    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:38:57.430001    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:38:57.430016    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:38:57.444182    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:38:57.444197    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:38:59.960331    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:04.962645    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:04.962839    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:04.978550    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:04.978634    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:04.990397    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:04.990472    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:05.000622    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:05.000688    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:05.011448    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:05.011517    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:05.026823    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:05.026885    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:05.037357    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:05.037423    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:05.052775    8572 logs.go:276] 0 containers: []
	W0719 07:39:05.052786    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:05.052840    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:05.062752    8572 logs.go:276] 0 containers: []
	W0719 07:39:05.062762    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:05.062772    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:05.062777    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:05.080329    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:05.080339    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:05.092027    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:05.092039    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:05.134851    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:05.134868    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:05.146350    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:05.146361    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:05.159715    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:05.159724    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:05.164177    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:05.164185    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:05.177997    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:05.178006    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:05.191692    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:05.191702    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:05.203387    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:05.203397    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:05.218407    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:05.218418    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:05.230836    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:05.230845    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:05.253997    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:05.254004    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:05.292557    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:05.292572    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:05.308263    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:05.308273    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
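	Between probes the gatherer shells out once per log source, using exactly the `/bin/bash -c` commands shown inline above. A compact sketch of that fan-out: the command strings are copied verbatim from the log, while the runner wrapper and output handling are assumptions.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command through bash, as the ssh_runner
// lines above do, and reports how much output it captured.
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s (%d bytes, err=%v)\n", name, len(out), err)
}

func main() {
	sources := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"Docker":           `sudo journalctl -u docker -u cri-docker -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"describe nodes":   `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	}
	for name, cmd := range sources {
		gather(name, cmd)
	}
	// Per-container logs are tailed the same way, one ID at a time:
	gather("coredns [f87a9b476623]", "docker logs --tail 400 f87a9b476623")
}
```

	Note that the order of the "Gathering logs for ..." lines differs from cycle to cycle in the log; iterating over a Go map, as in this sketch, reproduces that nondeterminism.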
	I0719 07:39:07.852806    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:12.855016    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:12.855166    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:12.866296    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:12.866367    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:12.876707    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:12.876779    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:12.887015    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:12.887083    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:12.897552    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:12.897624    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:12.908132    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:12.908203    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:12.918450    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:12.918515    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:12.928212    8572 logs.go:276] 0 containers: []
	W0719 07:39:12.928224    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:12.928280    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:12.938446    8572 logs.go:276] 0 containers: []
	W0719 07:39:12.938455    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:12.938463    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:12.938468    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:12.956132    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:12.956145    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:12.968292    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:12.968303    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:12.979904    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:12.979917    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:12.994718    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:12.994727    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:13.015694    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:13.015708    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:13.029474    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:13.029487    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:13.033493    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:13.033499    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:13.067482    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:13.067496    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:13.081720    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:13.081729    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:13.093735    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:13.093750    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:13.109720    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:13.109732    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:13.132645    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:13.132657    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:13.168893    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:13.168900    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:13.182879    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:13.182892    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:15.721271    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:20.723610    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:20.723740    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:20.740291    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:20.740364    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:20.751058    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:20.751124    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:20.761225    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:20.761289    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:20.774467    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:20.774532    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:20.785255    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:20.785323    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:20.795694    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:20.795751    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:20.806258    8572 logs.go:276] 0 containers: []
	W0719 07:39:20.806271    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:20.806328    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:20.815926    8572 logs.go:276] 0 containers: []
	W0719 07:39:20.815939    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:20.815947    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:20.815954    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:20.827489    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:20.827502    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:20.842328    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:20.842338    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:20.861099    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:20.861115    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:20.874997    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:20.875007    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:20.911560    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:20.911571    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:20.923012    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:20.923028    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:20.946603    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:20.946611    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:20.962258    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:20.962269    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:20.999540    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:20.999548    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:21.004054    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:21.004063    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:21.020398    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:21.020409    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:21.035530    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:21.035544    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:21.053236    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:21.053245    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:21.087868    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:21.087880    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:23.606140    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:28.608434    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:28.608544    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:28.621999    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:28.622074    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:28.633370    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:28.633442    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:28.643989    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:28.644060    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:28.658811    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:28.658875    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:28.669349    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:28.669420    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:28.680066    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:28.680125    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:28.690026    8572 logs.go:276] 0 containers: []
	W0719 07:39:28.690036    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:28.690094    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:28.700213    8572 logs.go:276] 0 containers: []
	W0719 07:39:28.700226    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:28.700235    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:28.700240    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:28.714432    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:28.714443    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:28.729114    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:28.729127    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:28.741517    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:28.741529    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:28.755521    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:28.755532    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:28.794456    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:28.794474    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:28.798690    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:28.798698    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:28.838025    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:28.838036    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:28.881245    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:28.881258    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:28.893204    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:28.893215    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:28.916509    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:28.916518    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:28.930787    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:28.930800    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:28.943280    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:28.943294    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:28.954636    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:28.954650    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:28.970360    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:28.970371    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:31.494043    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:36.496396    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:36.496568    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:36.510609    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:36.510688    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:36.521748    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:36.521821    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:36.532112    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:36.532176    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:36.543932    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:36.543997    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:36.561088    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:36.561156    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:36.571981    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:36.572043    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:36.581960    8572 logs.go:276] 0 containers: []
	W0719 07:39:36.581972    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:36.582031    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:36.592686    8572 logs.go:276] 0 containers: []
	W0719 07:39:36.592697    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:36.592704    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:36.592711    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:36.629438    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:36.629446    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:36.664512    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:36.664523    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:36.679681    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:36.679694    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:36.720006    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:36.720018    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:36.731915    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:36.731926    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:36.748889    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:36.748900    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:36.753019    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:36.753027    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:36.766552    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:36.766561    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:36.777445    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:36.777456    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:36.793330    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:36.793344    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:36.807815    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:36.807833    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:36.833903    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:36.833919    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:36.850205    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:36.850219    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:36.863533    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:36.863547    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:39.383590    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:44.385926    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:44.386116    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:44.406377    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:44.406486    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:44.420732    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:44.420810    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:44.432045    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:44.432109    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:44.442610    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:44.442678    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:44.453071    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:44.453136    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:44.463532    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:44.463602    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:44.473495    8572 logs.go:276] 0 containers: []
	W0719 07:39:44.473509    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:44.473569    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:44.483831    8572 logs.go:276] 0 containers: []
	W0719 07:39:44.483842    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:44.483850    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:44.483856    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:44.496089    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:44.496099    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:44.511284    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:44.511294    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:44.525939    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:44.525949    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:44.539644    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:44.539656    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:44.554413    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:44.554426    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:44.574252    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:44.574262    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:44.586522    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:44.586534    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:44.591516    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:44.591523    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:44.606388    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:44.606400    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:44.647648    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:44.647660    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:44.664467    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:44.664480    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:44.699047    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:44.699058    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:44.713249    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:44.713264    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:44.751907    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:44.751914    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:47.277467    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:39:52.280044    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:39:52.280277    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:39:52.310804    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:39:52.310926    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:39:52.326788    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:39:52.326868    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:39:52.339324    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:39:52.339385    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:39:52.350975    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:39:52.351046    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:39:52.361794    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:39:52.361855    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:39:52.373006    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:39:52.373072    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:39:52.383211    8572 logs.go:276] 0 containers: []
	W0719 07:39:52.383222    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:39:52.383306    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:39:52.398650    8572 logs.go:276] 0 containers: []
	W0719 07:39:52.398662    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:39:52.398670    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:39:52.398676    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:39:52.403417    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:39:52.403423    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:39:52.417622    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:39:52.417633    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:39:52.428502    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:39:52.428514    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:39:52.440326    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:39:52.440337    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:39:52.480185    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:39:52.480197    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:39:52.521479    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:39:52.521489    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:39:52.535331    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:39:52.535344    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:39:52.551079    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:39:52.551091    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:39:52.564800    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:39:52.564811    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:39:52.588033    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:39:52.588045    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:39:52.600321    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:39:52.600332    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:39:52.638689    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:39:52.638704    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:39:52.653646    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:39:52.653658    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:39:52.665349    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:39:52.665359    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:39:55.184354    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:00.186704    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:00.186876    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:00.205492    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:00.205588    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:00.219975    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:00.220048    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:00.232279    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:00.232350    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:00.245164    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:00.245228    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:00.255803    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:00.255877    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:00.266574    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:00.266636    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:00.276536    8572 logs.go:276] 0 containers: []
	W0719 07:40:00.276547    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:00.276606    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:00.286488    8572 logs.go:276] 0 containers: []
	W0719 07:40:00.286499    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:00.286508    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:00.286516    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:00.304329    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:00.304341    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:00.318555    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:00.318565    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:00.330024    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:00.330036    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:00.367818    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:00.367827    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:00.379348    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:00.379360    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:00.383600    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:00.383609    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:00.399048    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:00.399063    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:00.437186    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:00.437199    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:00.451529    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:00.451547    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:00.465759    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:00.465772    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:00.477454    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:00.477463    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:00.500147    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:00.500154    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:00.511466    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:00.511478    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:00.548138    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:00.548146    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:03.066582    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:08.068842    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:08.068991    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:08.084930    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:08.085005    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:08.098318    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:08.098390    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:08.109338    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:08.109408    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:08.123603    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:08.123672    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:08.134938    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:08.135005    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:08.146248    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:08.146317    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:08.156841    8572 logs.go:276] 0 containers: []
	W0719 07:40:08.156851    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:08.156906    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:08.167031    8572 logs.go:276] 0 containers: []
	W0719 07:40:08.167042    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:08.167049    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:08.167055    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:08.179653    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:08.179664    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:08.193951    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:08.193961    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:08.205447    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:08.205458    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:08.223347    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:08.223357    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:08.227632    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:08.227640    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:08.240142    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:08.240151    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:08.257910    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:08.257921    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:08.271741    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:08.271751    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:08.308660    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:08.308673    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:08.322885    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:08.322899    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:08.337946    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:08.337955    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:08.362400    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:08.362416    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:08.401394    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:08.401404    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:08.436207    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:08.436218    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:10.947820    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:15.950046    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:15.950189    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:15.967002    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:15.967085    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:15.979711    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:15.979780    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:15.991662    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:15.991722    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:16.002860    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:16.002924    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:16.014269    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:16.014326    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:16.025262    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:16.025353    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:16.035838    8572 logs.go:276] 0 containers: []
	W0719 07:40:16.035847    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:16.035898    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:16.046170    8572 logs.go:276] 0 containers: []
	W0719 07:40:16.046183    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:16.046191    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:16.046197    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:16.057914    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:16.057925    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:16.069928    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:16.069940    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:16.083575    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:16.083585    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:16.107576    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:16.107587    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:16.120324    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:16.120338    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:16.158751    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:16.158762    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:16.169947    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:16.169958    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:16.208983    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:16.208994    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:16.213353    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:16.213362    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:16.247526    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:16.247540    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:16.266872    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:16.266882    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:16.281490    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:16.281500    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:16.295383    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:16.295396    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:16.311418    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:16.311429    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:18.828194    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:23.830460    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:23.830768    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:23.864046    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:23.864133    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:23.879846    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:23.879912    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:23.892214    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:23.892270    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:23.903631    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:23.903691    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:23.914427    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:23.914479    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:23.924788    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:23.924847    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:23.935271    8572 logs.go:276] 0 containers: []
	W0719 07:40:23.935281    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:23.935326    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:23.945509    8572 logs.go:276] 0 containers: []
	W0719 07:40:23.945522    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:23.945529    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:23.945535    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:23.949858    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:23.949867    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:23.984510    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:23.984524    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:23.998966    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:23.998979    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:24.010302    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:24.010311    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:24.022028    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:24.022038    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:24.037485    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:24.037498    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:24.055442    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:24.055452    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:24.069020    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:24.069033    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:24.084170    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:24.084184    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:24.124078    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:24.124089    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:24.136185    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:24.136196    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:24.175866    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:24.175874    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:24.190321    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:24.190331    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:24.212493    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:24.212501    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:26.726172    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:31.728657    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:31.729067    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:31.772904    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:31.773009    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:31.788776    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:31.788855    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:31.802054    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:31.802127    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:31.817847    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:31.817917    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:31.828294    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:31.828370    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:31.839943    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:31.840008    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:31.850894    8572 logs.go:276] 0 containers: []
	W0719 07:40:31.850904    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:31.850968    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:31.861225    8572 logs.go:276] 0 containers: []
	W0719 07:40:31.861237    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:31.861244    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:31.861250    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:31.875251    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:31.875266    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:31.890119    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:31.890128    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:31.913609    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:31.913619    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:31.924846    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:31.924855    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:31.929241    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:31.929250    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:31.940398    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:31.940412    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:31.981849    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:31.981861    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:31.996474    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:31.996483    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:32.008897    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:32.008909    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:32.023846    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:32.023860    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:32.061351    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:32.061360    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:32.096455    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:32.096469    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:32.113859    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:32.113871    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:32.128723    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:32.128737    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
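The block above is one complete pass of minikube's log gatherer: for each control-plane component it lists matching container IDs with docker ps -a --filter=name=k8s_<component>, warns when nothing matches, then tails the last 400 lines of each hit. A minimal Go sketch of that pattern, shelling out to the same docker commands (hypothetical helper, not minikube's actual logs.go source):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
    	for _, c := range components {
    		// List all containers (running or exited) whose name matches k8s_<component>.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// Tail the last 400 lines of each matching container, as in the log above.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("==> %s [%s]\n%s\n", c, id, logs)
    		}
    	}
    }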
	I0719 07:40:34.647940    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:39.650233    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:39.650460    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:39.680439    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:39.680549    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:39.698322    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:39.698394    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:39.712699    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:39.712761    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:39.724666    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:39.724737    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:39.735948    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:39.736011    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:39.747081    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:39.747163    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:39.757884    8572 logs.go:276] 0 containers: []
	W0719 07:40:39.757896    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:39.757959    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:39.768079    8572 logs.go:276] 0 containers: []
	W0719 07:40:39.768092    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:39.768100    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:39.768106    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:39.807798    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:39.807808    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:39.812405    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:39.812413    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:39.824964    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:39.824976    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:39.839017    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:39.839028    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:39.853604    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:39.853614    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:39.866402    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:39.866412    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:39.891212    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:39.891226    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:39.955141    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:39.955159    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:39.996265    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:39.996282    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:40.011325    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:40.011335    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:40.022625    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:40.022636    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:40.040484    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:40.040493    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:40.052122    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:40.052138    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:40.066561    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:40.066570    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:42.582562    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:47.584906    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:47.585083    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:40:47.605952    8572 logs.go:276] 2 containers: [4c600183ec3b 42ae714b96fa]
	I0719 07:40:47.606046    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:40:47.621561    8572 logs.go:276] 2 containers: [509d0c71d44f 9130f74d6072]
	I0719 07:40:47.621637    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:40:47.633486    8572 logs.go:276] 1 containers: [f87a9b476623]
	I0719 07:40:47.633555    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:40:47.644016    8572 logs.go:276] 2 containers: [2604ebddac5e f86743dde90f]
	I0719 07:40:47.644086    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:40:47.654829    8572 logs.go:276] 1 containers: [dfb97a6d90da]
	I0719 07:40:47.654890    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:40:47.665019    8572 logs.go:276] 2 containers: [c652fffb9d82 04c61becd2f7]
	I0719 07:40:47.665079    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:40:47.675340    8572 logs.go:276] 0 containers: []
	W0719 07:40:47.675354    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:40:47.675410    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:40:47.686291    8572 logs.go:276] 0 containers: []
	W0719 07:40:47.686304    8572 logs.go:278] No container was found matching "storage-provisioner"
	I0719 07:40:47.686312    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:40:47.686319    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:40:47.708709    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:40:47.708716    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:40:47.720211    8572 logs.go:123] Gathering logs for etcd [9130f74d6072] ...
	I0719 07:40:47.720221    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9130f74d6072"
	I0719 07:40:47.740741    8572 logs.go:123] Gathering logs for coredns [f87a9b476623] ...
	I0719 07:40:47.740752    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f87a9b476623"
	I0719 07:40:47.752172    8572 logs.go:123] Gathering logs for kube-proxy [dfb97a6d90da] ...
	I0719 07:40:47.752185    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfb97a6d90da"
	I0719 07:40:47.764337    8572 logs.go:123] Gathering logs for kube-controller-manager [04c61becd2f7] ...
	I0719 07:40:47.764348    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04c61becd2f7"
	I0719 07:40:47.777994    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:40:47.778007    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 07:40:47.814984    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:40:47.814992    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:40:47.819175    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:40:47.819184    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:40:47.858706    8572 logs.go:123] Gathering logs for kube-apiserver [4c600183ec3b] ...
	I0719 07:40:47.858715    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4c600183ec3b"
	I0719 07:40:47.876879    8572 logs.go:123] Gathering logs for etcd [509d0c71d44f] ...
	I0719 07:40:47.876889    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 509d0c71d44f"
	I0719 07:40:47.894857    8572 logs.go:123] Gathering logs for kube-scheduler [2604ebddac5e] ...
	I0719 07:40:47.894868    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2604ebddac5e"
	I0719 07:40:47.907299    8572 logs.go:123] Gathering logs for kube-apiserver [42ae714b96fa] ...
	I0719 07:40:47.907313    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42ae714b96fa"
	I0719 07:40:47.945053    8572 logs.go:123] Gathering logs for kube-scheduler [f86743dde90f] ...
	I0719 07:40:47.945064    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86743dde90f"
	I0719 07:40:47.960168    8572 logs.go:123] Gathering logs for kube-controller-manager [c652fffb9d82] ...
	I0719 07:40:47.960180    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c652fffb9d82"
	I0719 07:40:50.483599    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:40:55.485907    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:40:55.485997    8572 kubeadm.go:597] duration metric: took 4m3.335150958s to restartPrimaryControlPlane
	W0719 07:40:55.486062    8572 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
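Each "Checking apiserver healthz" / "stopped:" pair above is one probe of https://10.0.2.15:8443/healthz with a roughly 5-second client timeout; after about four minutes of failed probes minikube gives up on restarting the control plane and falls back to a full reset, which the kubeadm reset below performs. A minimal sketch of that polling shape, assuming a plain HTTP client with certificate checks disabled for brevity (hypothetical code, not minikube's api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz probes url until it returns 200 OK or the overall deadline passes.
    func waitForHealthz(url string, overall time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped" lines
    		// The apiserver cert is self-signed inside the VM; a sketch can skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(overall)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // control plane is healthy
    			}
    		}
    		time.Sleep(2 * time.Second) // brief pause before the next probe
    	}
    	return fmt.Errorf("apiserver at %s never became healthy within %s", url, overall)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println("!", err, "- will reset cluster")
    	}
    }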
	I0719 07:40:55.486098    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0719 07:40:56.411403    8572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 07:40:56.416373    8572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 07:40:56.419000    8572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 07:40:56.421753    8572 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 07:40:56.421759    8572 kubeadm.go:157] found existing configuration files:
	
	I0719 07:40:56.421785    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf
	I0719 07:40:56.424230    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 07:40:56.424261    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 07:40:56.426925    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf
	I0719 07:40:56.430160    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 07:40:56.430188    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 07:40:56.433034    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf
	I0719 07:40:56.435399    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 07:40:56.435416    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 07:40:56.438425    8572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf
	I0719 07:40:56.441609    8572 kubeadm.go:163] "https://control-plane.minikube.internal:51405" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51405 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 07:40:56.441628    8572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
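The grep/rm pairs above enforce a simple invariant: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint (here https://control-plane.minikube.internal:51405) is treated as stale and removed so kubeadm init can regenerate it; a file that is missing entirely fails the grep the same way. A hedged Go sketch of that check-then-remove loop (hypothetical code, not minikube's kubeadm.go):

    package main

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51405"
    	for _, name := range []string{"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf"} {
    		path := filepath.Join("/etc/kubernetes", name)
    		data, err := os.ReadFile(path)
    		// A read error (file absent) and a missing endpoint are handled the same
    		// way: delete the file, ignoring errors, like `sudo rm -f` above.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(path)
    		}
    	}
    }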
	I0719 07:40:56.444166    8572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 07:40:56.459842    8572 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0719 07:40:56.460022    8572 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 07:40:56.507913    8572 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 07:40:56.507997    8572 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 07:40:56.508117    8572 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 07:40:56.562302    8572 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 07:40:56.568454    8572 out.go:204]   - Generating certificates and keys ...
	I0719 07:40:56.568490    8572 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 07:40:56.568522    8572 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 07:40:56.568572    8572 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 07:40:56.568602    8572 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 07:40:56.568635    8572 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 07:40:56.568659    8572 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 07:40:56.568690    8572 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 07:40:56.568720    8572 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 07:40:56.568772    8572 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 07:40:56.568811    8572 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 07:40:56.568834    8572 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 07:40:56.568865    8572 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 07:40:56.617960    8572 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 07:40:56.727489    8572 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 07:40:56.930677    8572 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 07:40:57.024793    8572 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 07:40:57.056485    8572 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 07:40:57.056896    8572 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 07:40:57.056920    8572 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 07:40:57.124274    8572 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 07:40:57.127621    8572 out.go:204]   - Booting up control plane ...
	I0719 07:40:57.127670    8572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 07:40:57.127721    8572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 07:40:57.127832    8572 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 07:40:57.137382    8572 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 07:40:57.138045    8572 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 07:41:01.640121    8572 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501626 seconds
	I0719 07:41:01.640350    8572 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 07:41:01.644563    8572 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 07:41:02.153625    8572 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 07:41:02.153871    8572 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-109000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 07:41:02.657809    8572 kubeadm.go:310] [bootstrap-token] Using token: 9qmjl0.4axsfkhx88jmp3qy
	I0719 07:41:02.670045    8572 out.go:204]   - Configuring RBAC rules ...
	I0719 07:41:02.670127    8572 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 07:41:02.670178    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 07:41:02.670804    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 07:41:02.671686    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 07:41:02.672747    8572 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 07:41:02.673533    8572 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 07:41:02.677752    8572 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 07:41:02.836215    8572 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 07:41:03.061224    8572 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 07:41:03.061651    8572 kubeadm.go:310] 
	I0719 07:41:03.061683    8572 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 07:41:03.061688    8572 kubeadm.go:310] 
	I0719 07:41:03.061730    8572 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 07:41:03.061740    8572 kubeadm.go:310] 
	I0719 07:41:03.061756    8572 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 07:41:03.061782    8572 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 07:41:03.061828    8572 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 07:41:03.061834    8572 kubeadm.go:310] 
	I0719 07:41:03.061858    8572 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 07:41:03.061862    8572 kubeadm.go:310] 
	I0719 07:41:03.061883    8572 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 07:41:03.061886    8572 kubeadm.go:310] 
	I0719 07:41:03.061911    8572 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 07:41:03.061946    8572 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 07:41:03.061982    8572 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 07:41:03.061989    8572 kubeadm.go:310] 
	I0719 07:41:03.062036    8572 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 07:41:03.062077    8572 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 07:41:03.062080    8572 kubeadm.go:310] 
	I0719 07:41:03.062122    8572 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9qmjl0.4axsfkhx88jmp3qy \
	I0719 07:41:03.062175    8572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 \
	I0719 07:41:03.062187    8572 kubeadm.go:310] 	--control-plane 
	I0719 07:41:03.062189    8572 kubeadm.go:310] 
	I0719 07:41:03.062230    8572 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 07:41:03.062235    8572 kubeadm.go:310] 
	I0719 07:41:03.062277    8572 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9qmjl0.4axsfkhx88jmp3qy \
	I0719 07:41:03.062341    8572 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c0079416ee672a46ea5c9a53cd13d3e504fe5042c2b22c9e2bf67c89ce7740e7 
	I0719 07:41:03.062616    8572 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 07:41:03.062626    8572 cni.go:84] Creating CNI manager for ""
	I0719 07:41:03.062635    8572 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:41:03.070161    8572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 07:41:03.074264    8572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 07:41:03.077246    8572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
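The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration that the "Configuring bridge CNI" step announces. The exact bytes are not shown in the log; a representative bridge conflist of the same shape (values are illustrative, not the actual file contents) looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }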
	I0719 07:41:03.082132    8572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 07:41:03.082207    8572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-109000 minikube.k8s.io/updated_at=2024_07_19T07_41_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=405f18a5a62536f49733886cdc838e8ec36975de minikube.k8s.io/name=stopped-upgrade-109000 minikube.k8s.io/primary=true
	I0719 07:41:03.082207    8572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 07:41:03.113627    8572 kubeadm.go:1113] duration metric: took 31.457541ms to wait for elevateKubeSystemPrivileges
	I0719 07:41:03.126417    8572 ops.go:34] apiserver oom_adj: -16
	I0719 07:41:03.126431    8572 kubeadm.go:394] duration metric: took 4m10.989210041s to StartCluster
	I0719 07:41:03.126445    8572 settings.go:142] acquiring lock: {Name:mk67df71d562cbffe9f3bde88489898c395cdfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:41:03.126537    8572 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:41:03.126934    8572 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/kubeconfig: {Name:mk0c17b3830610cdae4c834f6bae9631cabc7388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:41:03.127143    8572 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:41:03.127157    8572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 07:41:03.127191    8572 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-109000"
	I0719 07:41:03.127207    8572 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-109000"
	W0719 07:41:03.127210    8572 addons.go:243] addon storage-provisioner should already be in state true
	I0719 07:41:03.127224    8572 host.go:66] Checking if "stopped-upgrade-109000" exists ...
	I0719 07:41:03.127225    8572 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-109000"
	I0719 07:41:03.127241    8572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-109000"
	I0719 07:41:03.127224    8572 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:41:03.131226    8572 out.go:177] * Verifying Kubernetes components...
	I0719 07:41:03.131888    8572 kapi.go:59] client config for stopped-upgrade-109000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/stopped-upgrade-109000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-5980/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fd7790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 07:41:03.135531    8572 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-109000"
	W0719 07:41:03.135536    8572 addons.go:243] addon default-storageclass should already be in state true
	I0719 07:41:03.135545    8572 host.go:66] Checking if "stopped-upgrade-109000" exists ...
	I0719 07:41:03.136083    8572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 07:41:03.136090    8572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 07:41:03.136095    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:41:03.139180    8572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 07:41:03.143206    8572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 07:41:03.144378    8572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:41:03.144383    8572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 07:41:03.144387    8572 sshutil.go:53] new ssh client: &{IP:localhost Port:51371 SSHKeyPath:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/stopped-upgrade-109000/id_rsa Username:docker}
	I0719 07:41:03.220988    8572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 07:41:03.226145    8572 api_server.go:52] waiting for apiserver process to appear ...
	I0719 07:41:03.226186    8572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 07:41:03.232157    8572 api_server.go:72] duration metric: took 105.0045ms to wait for apiserver process to appear ...
	I0719 07:41:03.232165    8572 api_server.go:88] waiting for apiserver healthz status ...
	I0719 07:41:03.232173    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:03.252036    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 07:41:03.309652    8572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 07:41:08.234333    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:08.234379    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:13.234758    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:13.234782    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:18.235107    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:18.235131    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:23.235545    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:23.235590    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:28.236712    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:28.236759    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:33.237680    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:33.237733    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0719 07:41:33.631398    8572 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
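The 'default-storageclass' callback that failed above boils down to a client-go List of StorageClasses against the apiserver; with 10.0.2.15:8443 not answering, the call surfaces the dial i/o timeout verbatim. A minimal sketch of that call, assuming a kubeconfig like the one minikube writes for the profile (hypothetical code, not minikube's addon callback):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the profile's kubeconfig (path as seen earlier in the log).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19302-5980/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// This List is what times out when the apiserver at 10.0.2.15:8443 is down.
    	scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		fmt.Println("Error listing StorageClasses:", err)
    		return
    	}
    	for _, sc := range scs.Items {
    		fmt.Println(sc.Name)
    	}
    }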
	I0719 07:41:33.635475    8572 out.go:177] * Enabled addons: storage-provisioner
	I0719 07:41:33.643463    8572 addons.go:510] duration metric: took 30.516593s for enable addons: enabled=[storage-provisioner]
	I0719 07:41:38.238923    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:38.238976    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:43.240515    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:43.240552    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:48.242468    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:48.242512    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:53.244671    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:53.244708    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:41:58.246846    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:41:58.246873    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:42:03.249024    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:42:03.249252    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:42:03.289375    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:42:03.289446    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:42:03.301560    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:42:03.301640    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:42:03.312447    8572 logs.go:276] 2 containers: [59864643b4b0 455dff02ae0e]
	I0719 07:42:03.312514    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:42:03.322751    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:42:03.322819    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:42:03.333598    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:42:03.333670    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:42:03.344214    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:42:03.344278    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:42:03.354452    8572 logs.go:276] 0 containers: []
	W0719 07:42:03.354464    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:42:03.354518    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:42:03.364988    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:42:03.365004    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:42:03.365016    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:42:03.401288    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:42:03.401300    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:42:03.416034    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:42:03.416046    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:42:03.429981    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:42:03.429994    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:42:03.441668    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:42:03.441677    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:42:03.453550    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:42:03.453569    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:42:03.465638    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:42:03.465650    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:42:03.484910    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:42:03.484921    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:42:03.522282    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:03.522375    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:03.523666    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:42:03.523674    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:42:03.539156    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:42:03.539168    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:42:03.553836    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:42:03.553846    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:42:03.565115    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:42:03.565129    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:42:03.589935    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:42:03.589944    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:42:03.594080    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:03.594089    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:42:03.594111    8572 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0719 07:42:03.594115    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	  Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:03.594118    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	  Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:03.594123    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:03.594131    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:42:13.598301    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:42:18.601210    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:42:18.601583    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:42:18.642372    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:42:18.642497    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:42:18.664740    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:42:18.664836    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:42:18.680187    8572 logs.go:276] 2 containers: [59864643b4b0 455dff02ae0e]
	I0719 07:42:18.680272    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:42:18.692787    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:42:18.692852    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:42:18.704233    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:42:18.704294    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:42:18.717855    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:42:18.717917    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:42:18.731786    8572 logs.go:276] 0 containers: []
	W0719 07:42:18.731799    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:42:18.731846    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:42:18.742374    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:42:18.742386    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:42:18.742392    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:42:18.754353    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:42:18.754367    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:42:18.758533    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:42:18.758542    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:42:18.773485    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:42:18.773498    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:42:18.788222    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:42:18.788234    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:42:18.803341    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:42:18.803353    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:42:18.821379    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:42:18.821389    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:42:18.845222    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:42:18.845232    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:42:18.857057    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:42:18.857068    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:42:18.894488    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:18.894581    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:18.895899    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:42:18.895904    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:42:18.931709    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:42:18.931721    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:42:18.944391    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:42:18.944401    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:42:18.957143    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:42:18.957155    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:42:18.969526    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:18.969535    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:42:18.969564    8572 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0719 07:42:18.969568    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	  Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:18.969572    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	  Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:18.969575    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:18.969578    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:42:28.972257    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:42:33.974913    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:42:33.975296    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:42:34.021205    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:42:34.021332    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:42:34.041743    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:42:34.041833    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:42:34.055861    8572 logs.go:276] 2 containers: [59864643b4b0 455dff02ae0e]
	I0719 07:42:34.055934    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:42:34.067475    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:42:34.067536    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:42:34.077560    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:42:34.077628    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:42:34.088550    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:42:34.088613    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:42:34.098977    8572 logs.go:276] 0 containers: []
	W0719 07:42:34.098988    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:42:34.099039    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:42:34.109171    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:42:34.109191    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:42:34.109197    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:42:34.120677    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:42:34.120690    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:42:34.158428    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:34.158523    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:34.159773    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:42:34.159777    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:42:34.164067    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:42:34.164072    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:42:34.178555    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:42:34.178564    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:42:34.197000    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:42:34.197010    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:42:34.208587    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:42:34.208598    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:42:34.220334    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:42:34.220344    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:42:34.235029    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:42:34.235040    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:42:34.246439    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:42:34.246450    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:42:34.281092    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:42:34.281103    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:42:34.293436    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:42:34.293450    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:42:34.310759    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:42:34.310770    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:42:34.335350    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:34.335360    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:42:34.335385    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:42:34.335390    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:34.335394    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:34.335400    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:34.335402    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
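	(The block above is one full iteration of minikube's recovery loop, and it repeats essentially unchanged below: probe https://10.0.2.15:8443/healthz with a short per-request timeout, and on timeout re-enumerate the control-plane containers and re-collect their logs. A minimal Go sketch of that probe loop, using the URL and cadence taken from the log; the helper name pollHealthz is ours, and InsecureSkipVerify merely stands in for the VM's self-signed apiserver certificate:

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // pollHealthz probes url until it returns 200 OK or attempts run out.
	    // The 5s client timeout and ~10s pause mirror the cadence in the log.
	    func pollHealthz(url string, attempts int) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // The apiserver inside the VM serves a self-signed cert;
	                // real code should trust the cluster CA instead.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for i := 0; i < attempts; i++ {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // apiserver answered: healthy
	                }
	            }
	            time.Sleep(10 * time.Second)
	        }
	        return fmt.Errorf("apiserver at %s never became healthy", url)
	    }

	    func main() {
	        if err := pollHealthz("https://10.0.2.15:8443/healthz", 6); err != nil {
	            fmt.Println(err)
	        }
	    }
	)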
	I0719 07:42:44.339451    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:42:49.341723    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:42:49.341971    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:42:49.367689    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:42:49.367801    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:42:49.384961    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:42:49.385025    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:42:49.398291    8572 logs.go:276] 2 containers: [59864643b4b0 455dff02ae0e]
	I0719 07:42:49.398360    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:42:49.409656    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:42:49.409721    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:42:49.420218    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:42:49.420295    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:42:49.431637    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:42:49.431701    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:42:49.441976    8572 logs.go:276] 0 containers: []
	W0719 07:42:49.441985    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:42:49.442033    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:42:49.458162    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:42:49.458175    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:42:49.458180    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:42:49.469724    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:42:49.469736    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:42:49.481479    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:42:49.481491    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:42:49.492673    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:42:49.492687    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:42:49.516176    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:42:49.516183    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:42:49.550936    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:49.551032    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:49.552287    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:42:49.552291    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:42:49.586839    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:42:49.586848    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:42:49.604391    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:42:49.604403    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:42:49.618091    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:42:49.618104    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:42:49.629383    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:42:49.629392    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:42:49.633542    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:42:49.633548    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:42:49.644944    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:42:49.644955    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:42:49.659198    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:42:49.659209    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:42:49.677040    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:49.677051    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:42:49.677080    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:42:49.677085    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:42:49.677088    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:42:49.677092    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:42:49.677094    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
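	(Each collection pass starts by resolving container IDs one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, exactly as the ssh_runner lines show. A hedged reproduction of that discovery step in Go, shelling out the same way one might from outside minikube; the component list is copied from the log and containerIDs is our name:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists IDs of all containers, running or exited, whose
	    // name matches k8s_<component> -- the dockershim naming scheme the
	    // log filters on.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component,
	            "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
	            "kube-scheduler", "kube-proxy", "kube-controller-manager",
	            "kindnet", "storage-provisioner"} {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Printf("%s: %v\n", c, err)
	                continue
	            }
	            fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	        }
	    }
	)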
	I0719 07:42:59.681232    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:43:04.683625    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:43:04.684060    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:43:04.728633    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:43:04.728790    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:43:04.749209    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:43:04.749308    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:43:04.764184    8572 logs.go:276] 2 containers: [59864643b4b0 455dff02ae0e]
	I0719 07:43:04.764259    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:43:04.776371    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:43:04.776441    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:43:04.787239    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:43:04.787303    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:43:04.798299    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:43:04.798365    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:43:04.808924    8572 logs.go:276] 0 containers: []
	W0719 07:43:04.808936    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:43:04.808992    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:43:04.826412    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:43:04.826430    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:43:04.826436    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:43:04.839435    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:43:04.839448    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:43:04.851245    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:43:04.851260    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:43:04.863279    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:43:04.863288    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:43:04.883889    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:43:04.883902    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:43:04.899299    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:43:04.899313    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:43:04.913333    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:43:04.913344    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:43:04.925060    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:43:04.925070    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:43:04.940251    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:43:04.940263    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:43:04.968708    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:43:04.968719    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:43:05.004852    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:05.004945    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:05.006295    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:43:05.006304    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:43:05.010444    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:43:05.010452    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:43:05.044014    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:43:05.044028    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:43:05.058376    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:05.058387    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:43:05.058409    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:43:05.058412    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:05.058415    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:05.058419    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:05.058421    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
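	(Every log source in a pass is tail-bounded to 400 lines -- docker logs --tail 400, journalctl -n 400, dmesg | tail -n 400 -- so a wedged component cannot flood the report. A small Go sketch of the same bounded gathering, assuming the commands as they appear in the log; the container ID below is the etcd container from this run and is illustrative only:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // gather runs one bounded collection command and prints its output,
	    // tagging it with a human-readable source name.
	    func gather(name string, args ...string) {
	        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	        fmt.Printf("==> %s (err: %v)\n%s\n", name, err, out)
	    }

	    func main() {
	        // Container logs, as in `docker logs --tail 400 <id>`.
	        gather("etcd", "docker", "logs", "--tail", "400", "747a798e619e")
	        // Unit logs, as in `journalctl -u kubelet -n 400`.
	        gather("kubelet", "journalctl", "-u", "kubelet", "-n", "400")
	        // Kernel messages, warnings and above, capped like the log does.
	        gather("dmesg", "sh", "-c",
	            "dmesg --level warn,err,crit,alert,emerg | tail -n 400")
	    }
	)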
	I0719 07:43:15.062533    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:43:20.064892    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:43:20.065370    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:43:20.102726    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:43:20.102844    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:43:20.123565    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:43:20.123672    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:43:20.138678    8572 logs.go:276] 4 containers: [99773d615fe0 e7580f1ffe97 59864643b4b0 455dff02ae0e]
	I0719 07:43:20.138748    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:43:20.153368    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:43:20.153433    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:43:20.163939    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:43:20.164002    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:43:20.175004    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:43:20.175066    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:43:20.185157    8572 logs.go:276] 0 containers: []
	W0719 07:43:20.185169    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:43:20.185224    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:43:20.195437    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:43:20.195453    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:43:20.195458    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:43:20.200293    8572 logs.go:123] Gathering logs for coredns [99773d615fe0] ...
	I0719 07:43:20.200302    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99773d615fe0"
	I0719 07:43:20.212057    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:43:20.212066    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:43:20.224263    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:43:20.224273    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:43:20.235836    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:43:20.235850    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:43:20.247318    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:43:20.247331    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:43:20.258883    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:43:20.258898    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:43:20.273333    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:43:20.273342    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:43:20.309931    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:20.310026    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:20.311280    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:43:20.311284    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:43:20.345449    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:43:20.345462    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:43:20.360467    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:43:20.360479    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:43:20.378252    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:43:20.378261    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:43:20.403740    8572 logs.go:123] Gathering logs for coredns [e7580f1ffe97] ...
	I0719 07:43:20.403748    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7580f1ffe97"
	I0719 07:43:20.415311    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:43:20.415322    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:43:20.427090    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:43:20.427099    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:43:20.441559    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:20.441570    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:43:20.441591    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:43:20.441595    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:20.441599    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:20.441603    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:20.441605    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
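	(The container-status step also encodes a runtime fallback: prefer crictl when it is on PATH, otherwise fall back to docker ps -a -- that is the `which crictl || echo crictl` / `|| sudo docker ps -a` chain in the log. The same preference order rendered in Go; containerStatus is our name:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // containerStatus prefers the CRI-generic crictl and falls back to
	    // docker, mirroring the fallback chain in the log's container-status
	    // command.
	    func containerStatus() ([]byte, error) {
	        if _, err := exec.LookPath("crictl"); err == nil {
	            return exec.Command("crictl", "ps", "-a").CombinedOutput()
	        }
	        return exec.Command("docker", "ps", "-a").CombinedOutput()
	    }

	    func main() {
	        out, err := containerStatus()
	        if err != nil {
	            fmt.Println("container status failed:", err)
	        }
	        fmt.Print(string(out))
	    }
	)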
	I0719 07:43:30.445632    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:43:35.447750    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:43:35.447933    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:43:35.462833    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:43:35.462916    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:43:35.474392    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:43:35.474453    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:43:35.484842    8572 logs.go:276] 4 containers: [99773d615fe0 e7580f1ffe97 59864643b4b0 455dff02ae0e]
	I0719 07:43:35.484907    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:43:35.494938    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:43:35.494996    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:43:35.505414    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:43:35.505480    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:43:35.515819    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:43:35.515887    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:43:35.526779    8572 logs.go:276] 0 containers: []
	W0719 07:43:35.526790    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:43:35.526838    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:43:35.537044    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:43:35.537060    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:43:35.537065    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:43:35.551616    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:43:35.551628    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:43:35.563485    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:43:35.563497    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:43:35.574975    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:43:35.574987    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:43:35.589461    8572 logs.go:123] Gathering logs for coredns [99773d615fe0] ...
	I0719 07:43:35.589475    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99773d615fe0"
	I0719 07:43:35.604338    8572 logs.go:123] Gathering logs for coredns [e7580f1ffe97] ...
	I0719 07:43:35.604348    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7580f1ffe97"
	I0719 07:43:35.615724    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:43:35.615736    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:43:35.627101    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:43:35.627114    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:43:35.639017    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:43:35.639032    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:43:35.650443    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:43:35.650458    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:43:35.655144    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:43:35.655153    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:43:35.690150    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:43:35.690165    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:43:35.708104    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:43:35.708120    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:43:35.731954    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:43:35.731963    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:43:35.766677    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:35.766769    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:35.768029    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:43:35.768033    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:43:35.785974    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:35.785982    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:43:35.786007    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:43:35.786011    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:35.786014    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:35.786056    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:35.786060    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:45.789413    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:43:50.792121    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:43:50.792275    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:43:50.805743    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:43:50.805816    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:43:50.818312    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:43:50.818375    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:43:50.830052    8572 logs.go:276] 4 containers: [99773d615fe0 e7580f1ffe97 59864643b4b0 455dff02ae0e]
	I0719 07:43:50.830121    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:43:50.840497    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:43:50.840564    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:43:50.851467    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:43:50.851534    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:43:50.862422    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:43:50.862482    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:43:50.873122    8572 logs.go:276] 0 containers: []
	W0719 07:43:50.873133    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:43:50.873189    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:43:50.884085    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:43:50.884100    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:43:50.884106    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:43:50.909804    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:43:50.909818    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:43:50.944778    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:43:50.944793    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:43:50.960054    8572 logs.go:123] Gathering logs for coredns [99773d615fe0] ...
	I0719 07:43:50.960066    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99773d615fe0"
	I0719 07:43:50.973123    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:43:50.973131    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:43:50.991200    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:43:50.991214    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:43:51.002779    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:43:51.002791    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:43:51.007215    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:43:51.007224    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:43:51.021963    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:43:51.021972    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:43:51.033353    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:43:51.033362    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:43:51.044665    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:43:51.044677    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:43:51.056448    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:43:51.056459    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:43:51.090677    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:51.090770    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:51.092054    8572 logs.go:123] Gathering logs for coredns [e7580f1ffe97] ...
	I0719 07:43:51.092057    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7580f1ffe97"
	I0719 07:43:51.103022    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:43:51.103032    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:43:51.114552    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:43:51.114561    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:43:51.138062    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:51.138069    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:43:51.138097    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:43:51.138100    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:43:51.138112    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:43:51.138126    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:51.138132    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:01.141383    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:44:06.142768    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:44:06.142848    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:44:06.155435    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:44:06.155486    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:44:06.166617    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:44:06.166674    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:44:06.178076    8572 logs.go:276] 4 containers: [99773d615fe0 e7580f1ffe97 59864643b4b0 455dff02ae0e]
	I0719 07:44:06.178129    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:44:06.191081    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:44:06.191135    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:44:06.201614    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:44:06.201668    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:44:06.213169    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:44:06.213232    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:44:06.224806    8572 logs.go:276] 0 containers: []
	W0719 07:44:06.224815    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:44:06.224872    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:44:06.236587    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:44:06.236602    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:44:06.236608    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:44:06.241866    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:44:06.241878    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:44:06.258141    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:44:06.258153    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:44:06.270352    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:44:06.270364    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:44:06.307501    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:06.307599    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:06.308936    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:44:06.308944    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:44:06.324846    8572 logs.go:123] Gathering logs for coredns [99773d615fe0] ...
	I0719 07:44:06.324859    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99773d615fe0"
	I0719 07:44:06.337967    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:44:06.337975    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:44:06.350222    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:44:06.350233    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:44:06.366898    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:44:06.366910    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:44:06.380611    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:44:06.380619    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:44:06.398975    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:44:06.398986    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:44:06.411928    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:44:06.411944    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:44:06.438430    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:44:06.438452    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:44:06.483274    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:44:06.483285    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:44:06.507485    8572 logs.go:123] Gathering logs for coredns [e7580f1ffe97] ...
	I0719 07:44:06.507493    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7580f1ffe97"
	I0719 07:44:06.521470    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:06.521483    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:44:06.521509    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:44:06.521513    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:06.521517    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:06.521533    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:06.521536    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:16.525606    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:44:21.527862    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:44:21.528153    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:44:21.562124    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:44:21.562278    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:44:21.582043    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:44:21.582121    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:44:21.599901    8572 logs.go:276] 4 containers: [99773d615fe0 e7580f1ffe97 59864643b4b0 455dff02ae0e]
	I0719 07:44:21.599977    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:44:21.612109    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:44:21.612178    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:44:21.626616    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:44:21.626673    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:44:21.637441    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:44:21.637503    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:44:21.647701    8572 logs.go:276] 0 containers: []
	W0719 07:44:21.647710    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:44:21.647758    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:44:21.658109    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:44:21.658125    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:44:21.658131    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:44:21.672268    8572 logs.go:123] Gathering logs for coredns [99773d615fe0] ...
	I0719 07:44:21.672280    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99773d615fe0"
	I0719 07:44:21.686224    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:44:21.686235    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:44:21.698432    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:44:21.698442    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:44:21.715075    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:44:21.715087    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:44:21.727184    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:44:21.727197    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:44:21.750511    8572 logs.go:123] Gathering logs for coredns [e7580f1ffe97] ...
	I0719 07:44:21.750520    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7580f1ffe97"
	I0719 07:44:21.766989    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:44:21.767000    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:44:21.786336    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:44:21.786346    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:44:21.805945    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:44:21.805955    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:44:21.844342    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:21.844435    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:21.845686    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:44:21.845691    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:44:21.881010    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:44:21.881022    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:44:21.896020    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:44:21.896029    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:44:21.907674    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:44:21.907686    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:44:21.911980    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:44:21.911989    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:44:21.923427    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:21.923436    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:44:21.923463    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:44:21.923468    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:21.923471    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:21.923482    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:21.923484    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:31.925714    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:44:36.928029    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:44:36.928257    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:44:36.951045    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:44:36.951141    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:44:36.966482    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:44:36.966551    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:44:36.979725    8572 logs.go:276] 4 containers: [99773d615fe0 e7580f1ffe97 59864643b4b0 455dff02ae0e]
	I0719 07:44:36.979795    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:44:36.990597    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:44:36.990661    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:44:37.001079    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:44:37.001144    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:44:37.011342    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:44:37.011401    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:44:37.021469    8572 logs.go:276] 0 containers: []
	W0719 07:44:37.021479    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:44:37.021528    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:44:37.031817    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:44:37.031833    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:44:37.031838    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:44:37.043485    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:44:37.043499    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:44:37.080772    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:37.080864    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:37.082115    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:44:37.082119    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:44:37.086097    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:44:37.086106    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:44:37.121040    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:44:37.121053    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:44:37.135096    8572 logs.go:123] Gathering logs for coredns [99773d615fe0] ...
	I0719 07:44:37.135109    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99773d615fe0"
	I0719 07:44:37.147302    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:44:37.147312    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:44:37.162809    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:44:37.162822    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:44:37.173998    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:44:37.174008    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:44:37.187552    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:44:37.187566    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:44:37.205617    8572 logs.go:123] Gathering logs for coredns [e7580f1ffe97] ...
	I0719 07:44:37.205627    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7580f1ffe97"
	I0719 07:44:37.217489    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:44:37.217500    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:44:37.231609    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:44:37.231620    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:44:37.246476    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:44:37.246485    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:44:37.258243    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:44:37.258257    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:44:37.281432    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:37.281441    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:44:37.281464    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:44:37.281467    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:37.281471    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:37.281495    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:37.281499    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:47.285653    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:44:52.288126    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:44:52.288459    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0719 07:44:52.321613    8572 logs.go:276] 1 containers: [32f11fae8e1d]
	I0719 07:44:52.321731    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0719 07:44:52.340368    8572 logs.go:276] 1 containers: [747a798e619e]
	I0719 07:44:52.340447    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0719 07:44:52.354581    8572 logs.go:276] 4 containers: [99773d615fe0 e7580f1ffe97 59864643b4b0 455dff02ae0e]
	I0719 07:44:52.354653    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0719 07:44:52.366740    8572 logs.go:276] 1 containers: [7fd2650e21f2]
	I0719 07:44:52.366809    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0719 07:44:52.378339    8572 logs.go:276] 1 containers: [443134764c2a]
	I0719 07:44:52.378401    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0719 07:44:52.389396    8572 logs.go:276] 1 containers: [a437699fc8c1]
	I0719 07:44:52.389458    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0719 07:44:52.400168    8572 logs.go:276] 0 containers: []
	W0719 07:44:52.400179    8572 logs.go:278] No container was found matching "kindnet"
	I0719 07:44:52.400239    8572 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0719 07:44:52.410816    8572 logs.go:276] 1 containers: [1b1d14b572c2]
	I0719 07:44:52.410832    8572 logs.go:123] Gathering logs for coredns [e7580f1ffe97] ...
	I0719 07:44:52.410838    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7580f1ffe97"
	I0719 07:44:52.422960    8572 logs.go:123] Gathering logs for coredns [59864643b4b0] ...
	I0719 07:44:52.422972    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59864643b4b0"
	I0719 07:44:52.434753    8572 logs.go:123] Gathering logs for kube-scheduler [7fd2650e21f2] ...
	I0719 07:44:52.434766    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd2650e21f2"
	I0719 07:44:52.451096    8572 logs.go:123] Gathering logs for storage-provisioner [1b1d14b572c2] ...
	I0719 07:44:52.451109    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b1d14b572c2"
	I0719 07:44:52.462895    8572 logs.go:123] Gathering logs for Docker ...
	I0719 07:44:52.462908    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0719 07:44:52.488003    8572 logs.go:123] Gathering logs for kubelet ...
	I0719 07:44:52.488010    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 07:44:52.525478    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:52.525572    8572 logs.go:138] Found kubelet problem: Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:52.526824    8572 logs.go:123] Gathering logs for etcd [747a798e619e] ...
	I0719 07:44:52.526829    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 747a798e619e"
	I0719 07:44:52.540402    8572 logs.go:123] Gathering logs for coredns [455dff02ae0e] ...
	I0719 07:44:52.540414    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 455dff02ae0e"
	I0719 07:44:52.551948    8572 logs.go:123] Gathering logs for kube-apiserver [32f11fae8e1d] ...
	I0719 07:44:52.551961    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32f11fae8e1d"
	I0719 07:44:52.570831    8572 logs.go:123] Gathering logs for kube-controller-manager [a437699fc8c1] ...
	I0719 07:44:52.570840    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a437699fc8c1"
	I0719 07:44:52.589711    8572 logs.go:123] Gathering logs for container status ...
	I0719 07:44:52.589724    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 07:44:52.602282    8572 logs.go:123] Gathering logs for describe nodes ...
	I0719 07:44:52.602296    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 07:44:52.637239    8572 logs.go:123] Gathering logs for coredns [99773d615fe0] ...
	I0719 07:44:52.637253    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99773d615fe0"
	I0719 07:44:52.652549    8572 logs.go:123] Gathering logs for kube-proxy [443134764c2a] ...
	I0719 07:44:52.652562    8572 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 443134764c2a"
	I0719 07:44:52.665799    8572 logs.go:123] Gathering logs for dmesg ...
	I0719 07:44:52.665811    8572 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 07:44:52.670116    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:52.670127    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 07:44:52.670150    8572 out.go:239] X Problems detected in kubelet:
	W0719 07:44:52.670155    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: W0719 14:41:16.705582    9792 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	W0719 07:44:52.670158    8572 out.go:239]   Jul 19 14:41:16 stopped-upgrade-109000 kubelet[9792]: E0719 14:41:16.705631    9792 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-109000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-109000' and this object
	I0719 07:44:52.670165    8572 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:52.670168    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:02.674354    8572 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0719 07:45:07.677128    8572 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0719 07:45:07.680346    8572 out.go:177] 
	W0719 07:45:07.684179    8572 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0719 07:45:07.684200    8572 out.go:239] * 
	W0719 07:45:07.686094    8572 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:07.695123    8572 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-109000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (577.78s)
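
The loop above is minikube's apiserver health check: on a roughly 15-second cycle (a 10s wait plus a 5s client timeout, per the timestamps) it re-lists the control-plane containers, re-gathers their logs, and retries GET https://10.0.2.15:8443/healthz until the 6m0s node-start budget is exhausted. A minimal manual repro of the same probe, as a sketch: 10.0.2.15 is the VM's own address on the qemu network, so the request has to run inside the guest (profile name taken from the log above).

	out/minikube-darwin-arm64 ssh -p stopped-upgrade-109000 -- \
	  curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo
	# "ok" means healthy; a timeout or refusal matches the
	# "apiserver healthz never reported healthy" exit above

The repeated kubelet warning ("no relationship found between node 'stopped-upgrade-109000' and this object") comes from the Kubernetes node authorizer, which only lets a kubelet read a ConfigMap once a pod bound to that node references it; it typically clears when kube-proxy is scheduled on the node, so it is a symptom here rather than the root failure.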

TestPause/serial/Start (9.82s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-227000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-227000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.753387167s)

-- stdout --
	* [pause-227000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-227000" primary control-plane node in "pause-227000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-227000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-227000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-227000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-227000 -n pause-227000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-227000 -n pause-227000: exit status 7 (63.784417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-227000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.82s)
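
This failure sets the pattern for the remaining start failures in this report: with the qemu2 driver, minikube delegates guest networking to the socket_vmnet daemon, and the client cannot reach its UNIX socket at /var/run/socket_vmnet ("Connection refused"), so the VM dies before provisioning ever starts. A few host-side checks, as a sketch; the launchd service name is whatever a stock lima-vm/socket_vmnet install registers and may differ on this CI host.

	ls -l /var/run/socket_vmnet                  # is the socket present?
	sudo launchctl list | grep -i socket_vmnet   # is the daemon loaded? (assumed label)
	# exercise the same client binary the qemu command lines in this report use;
	# with a healthy daemon the wrapped command runs instead of "Connection refused"
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo reachable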

TestNoKubernetes/serial/StartWithK8s (9.75s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-854000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-854000 --driver=qemu2 : exit status 80 (9.718731542s)

-- stdout --
	* [NoKubernetes-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-854000" primary control-plane node in "NoKubernetes-854000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-854000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-854000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000: exit status 7 (30.339458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.75s)

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --driver=qemu2 : exit status 80 (5.242798334s)

-- stdout --
	* [NoKubernetes-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-854000
	* Restarting existing qemu2 VM for "NoKubernetes-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000: exit status 7 (47.870792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --driver=qemu2 : exit status 80 (5.24143275s)

-- stdout --
	* [NoKubernetes-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-854000
	* Restarting existing qemu2 VM for "NoKubernetes-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000: exit status 7 (48.460958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-854000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-854000 --driver=qemu2 : exit status 80 (5.271828333s)

-- stdout --
	* [NoKubernetes-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-854000
	* Restarting existing qemu2 VM for "NoKubernetes-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-854000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-854000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-854000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-854000 -n NoKubernetes-854000: exit status 7 (49.705834ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-854000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
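
All four TestNoKubernetes subtests above fail on the same socket_vmnet connection refusal before Kubernetes (or its absence) ever comes into play. Once the daemon is reachable again, the group can be retried in isolation; a sketch of the usual invocation from the minikube repo root, assuming the integration suite's standard flags:

	go test ./test/integration -run 'TestNoKubernetes' -timeout 30m \
	  --minikube-start-args='--driver=qemu2'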

TestNetworkPlugins/group/auto/Start (9.8s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.80049275s)

-- stdout --
	* [auto-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-047000" primary control-plane node in "auto-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:43:07.154125    8779 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:43:07.154261    8779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:07.154264    8779 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:07.154266    8779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:07.154408    8779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:43:07.155441    8779 out.go:298] Setting JSON to false
	I0719 07:43:07.172106    8779 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6156,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:43:07.172171    8779 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:43:07.177011    8779 out.go:177] * [auto-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:43:07.185078    8779 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:43:07.185135    8779 notify.go:220] Checking for updates...
	I0719 07:43:07.191999    8779 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:43:07.195053    8779 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:43:07.198030    8779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:43:07.201001    8779 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:43:07.203981    8779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:43:07.207417    8779 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:43:07.207486    8779 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:43:07.207528    8779 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:43:07.211964    8779 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:43:07.218974    8779 start.go:297] selected driver: qemu2
	I0719 07:43:07.218980    8779 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:43:07.218986    8779 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:43:07.221367    8779 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:43:07.223992    8779 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:43:07.227081    8779 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:43:07.227102    8779 cni.go:84] Creating CNI manager for ""
	I0719 07:43:07.227110    8779 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:43:07.227122    8779 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:43:07.227157    8779 start.go:340] cluster config:
	{Name:auto-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:43:07.230865    8779 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:43:07.237985    8779 out.go:177] * Starting "auto-047000" primary control-plane node in "auto-047000" cluster
	I0719 07:43:07.242033    8779 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:43:07.242046    8779 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:43:07.242063    8779 cache.go:56] Caching tarball of preloaded images
	I0719 07:43:07.242123    8779 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:43:07.242129    8779 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:43:07.242185    8779 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/auto-047000/config.json ...
	I0719 07:43:07.242197    8779 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/auto-047000/config.json: {Name:mk17f8a5d0c708dbbb9f5e71c8e4b148af43aaaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:43:07.242490    8779 start.go:360] acquireMachinesLock for auto-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:07.242527    8779 start.go:364] duration metric: took 31.084µs to acquireMachinesLock for "auto-047000"
	I0719 07:43:07.242537    8779 start.go:93] Provisioning new machine with config: &{Name:auto-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:07.242564    8779 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:07.251068    8779 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:07.268430    8779 start.go:159] libmachine.API.Create for "auto-047000" (driver="qemu2")
	I0719 07:43:07.268458    8779 client.go:168] LocalClient.Create starting
	I0719 07:43:07.268519    8779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:07.268550    8779 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:07.268559    8779 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:07.268593    8779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:07.268615    8779 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:07.268624    8779 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:07.268982    8779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:07.394918    8779 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:07.523857    8779 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:07.523863    8779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:07.524087    8779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2
	I0719 07:43:07.533626    8779 main.go:141] libmachine: STDOUT: 
	I0719 07:43:07.533645    8779 main.go:141] libmachine: STDERR: 
	I0719 07:43:07.533700    8779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2 +20000M
	I0719 07:43:07.541613    8779 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:07.541634    8779 main.go:141] libmachine: STDERR: 
	I0719 07:43:07.541652    8779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2
	I0719 07:43:07.541657    8779 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:07.541670    8779 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:07.541701    8779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:c6:a6:a9:bd:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2
	I0719 07:43:07.543321    8779 main.go:141] libmachine: STDOUT: 
	I0719 07:43:07.543337    8779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:07.543355    8779 client.go:171] duration metric: took 274.895625ms to LocalClient.Create
	I0719 07:43:09.545550    8779 start.go:128] duration metric: took 2.302974208s to createHost
	I0719 07:43:09.545615    8779 start.go:83] releasing machines lock for "auto-047000", held for 2.303100666s
	W0719 07:43:09.545716    8779 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:09.556994    8779 out.go:177] * Deleting "auto-047000" in qemu2 ...
	W0719 07:43:09.580505    8779 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:09.580537    8779 start.go:729] Will try again in 5 seconds ...
	I0719 07:43:14.582607    8779 start.go:360] acquireMachinesLock for auto-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:14.582903    8779 start.go:364] duration metric: took 251.292µs to acquireMachinesLock for "auto-047000"
	I0719 07:43:14.582964    8779 start.go:93] Provisioning new machine with config: &{Name:auto-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:14.583094    8779 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:14.586522    8779 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:14.620599    8779 start.go:159] libmachine.API.Create for "auto-047000" (driver="qemu2")
	I0719 07:43:14.620651    8779 client.go:168] LocalClient.Create starting
	I0719 07:43:14.620748    8779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:14.620800    8779 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:14.620817    8779 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:14.620877    8779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:14.620916    8779 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:14.620927    8779 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:14.621449    8779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:14.756373    8779 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:14.867698    8779 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:14.867705    8779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:14.867889    8779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2
	I0719 07:43:14.877481    8779 main.go:141] libmachine: STDOUT: 
	I0719 07:43:14.877504    8779 main.go:141] libmachine: STDERR: 
	I0719 07:43:14.877579    8779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2 +20000M
	I0719 07:43:14.887119    8779 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:14.887148    8779 main.go:141] libmachine: STDERR: 
	I0719 07:43:14.887163    8779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2
	I0719 07:43:14.887170    8779 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:14.887183    8779 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:14.887218    8779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:40:84:1b:f5:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/auto-047000/disk.qcow2
	I0719 07:43:14.889352    8779 main.go:141] libmachine: STDOUT: 
	I0719 07:43:14.889376    8779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:14.889392    8779 client.go:171] duration metric: took 268.738083ms to LocalClient.Create
	I0719 07:43:16.891625    8779 start.go:128] duration metric: took 2.308473583s to createHost
	I0719 07:43:16.891672    8779 start.go:83] releasing machines lock for "auto-047000", held for 2.308777584s
	W0719 07:43:16.891897    8779 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:16.900175    8779 out.go:177] 
	W0719 07:43:16.905198    8779 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:43:16.905214    8779 out.go:239] * 
	* 
	W0719 07:43:16.906259    8779 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:43:16.921087    8779 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.80s)
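
Note: this failure, like the others in this group, happens before the guest ever boots: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the QEMU VM is never launched and minikube exits with GUEST_PROVISION (exit status 80). A quick host-side check is sketched below; it assumes socket_vmnet was installed at the paths shown in the log (Homebrew layout), and the brew-managed service name is an assumption that may differ per install.

	# Verify the unix socket exists and accepts connections (hypothetical check).
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo "socket_vmnet is up"

	# If socket_vmnet was installed via Homebrew, the daemon runs as a root
	# service and can be restarted like this (assumption: brew-managed service).
	sudo brew services restart socket_vmnet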

TestNetworkPlugins/group/flannel/Start (9.77s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.770303083s)

-- stdout --
	* [flannel-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-047000" primary control-plane node in "flannel-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:43:19.111925    8888 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:43:19.112055    8888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:19.112058    8888 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:19.112065    8888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:19.112192    8888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:43:19.113257    8888 out.go:298] Setting JSON to false
	I0719 07:43:19.129604    8888 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6168,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:43:19.129721    8888 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:43:19.135504    8888 out.go:177] * [flannel-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:43:19.142440    8888 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:43:19.142509    8888 notify.go:220] Checking for updates...
	I0719 07:43:19.149492    8888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:43:19.152489    8888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:43:19.155463    8888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:43:19.158500    8888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:43:19.161425    8888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:43:19.164867    8888 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:43:19.164940    8888 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:43:19.164994    8888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:43:19.169490    8888 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:43:19.176433    8888 start.go:297] selected driver: qemu2
	I0719 07:43:19.176439    8888 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:43:19.176444    8888 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:43:19.178754    8888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:43:19.181495    8888 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:43:19.182901    8888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:43:19.182917    8888 cni.go:84] Creating CNI manager for "flannel"
	I0719 07:43:19.182921    8888 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0719 07:43:19.182954    8888 start.go:340] cluster config:
	{Name:flannel-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:43:19.186579    8888 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:43:19.194449    8888 out.go:177] * Starting "flannel-047000" primary control-plane node in "flannel-047000" cluster
	I0719 07:43:19.198462    8888 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:43:19.198479    8888 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:43:19.198495    8888 cache.go:56] Caching tarball of preloaded images
	I0719 07:43:19.198564    8888 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:43:19.198571    8888 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:43:19.198629    8888 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/flannel-047000/config.json ...
	I0719 07:43:19.198642    8888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/flannel-047000/config.json: {Name:mkbccb5e1bda20ccf17d318c90d9118a2062cb00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:43:19.198852    8888 start.go:360] acquireMachinesLock for flannel-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:19.198885    8888 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "flannel-047000"
	I0719 07:43:19.198898    8888 start.go:93] Provisioning new machine with config: &{Name:flannel-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:19.198933    8888 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:19.207389    8888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:19.224606    8888 start.go:159] libmachine.API.Create for "flannel-047000" (driver="qemu2")
	I0719 07:43:19.224645    8888 client.go:168] LocalClient.Create starting
	I0719 07:43:19.224708    8888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:19.224740    8888 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:19.224749    8888 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:19.224785    8888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:19.224808    8888 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:19.224820    8888 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:19.225265    8888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:19.351710    8888 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:19.439345    8888 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:19.439351    8888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:19.439534    8888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2
	I0719 07:43:19.448686    8888 main.go:141] libmachine: STDOUT: 
	I0719 07:43:19.448707    8888 main.go:141] libmachine: STDERR: 
	I0719 07:43:19.448762    8888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2 +20000M
	I0719 07:43:19.456756    8888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:19.456771    8888 main.go:141] libmachine: STDERR: 
	I0719 07:43:19.456797    8888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2
	I0719 07:43:19.456800    8888 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:19.456817    8888 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:19.456843    8888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ee:19:cd:3d:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2
	I0719 07:43:19.458410    8888 main.go:141] libmachine: STDOUT: 
	I0719 07:43:19.458424    8888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:19.458443    8888 client.go:171] duration metric: took 233.796292ms to LocalClient.Create
	I0719 07:43:21.460521    8888 start.go:128] duration metric: took 2.261597208s to createHost
	I0719 07:43:21.460575    8888 start.go:83] releasing machines lock for "flannel-047000", held for 2.261705625s
	W0719 07:43:21.460605    8888 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:21.470409    8888 out.go:177] * Deleting "flannel-047000" in qemu2 ...
	W0719 07:43:21.483032    8888 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:21.483042    8888 start.go:729] Will try again in 5 seconds ...
	I0719 07:43:26.485367    8888 start.go:360] acquireMachinesLock for flannel-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:26.486003    8888 start.go:364] duration metric: took 495.792µs to acquireMachinesLock for "flannel-047000"
	I0719 07:43:26.486134    8888 start.go:93] Provisioning new machine with config: &{Name:flannel-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:26.486425    8888 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:26.490937    8888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:26.541056    8888 start.go:159] libmachine.API.Create for "flannel-047000" (driver="qemu2")
	I0719 07:43:26.541109    8888 client.go:168] LocalClient.Create starting
	I0719 07:43:26.541248    8888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:26.541324    8888 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:26.541343    8888 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:26.541411    8888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:26.541456    8888 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:26.541471    8888 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:26.541947    8888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:26.680118    8888 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:26.795194    8888 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:26.795202    8888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:26.795391    8888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2
	I0719 07:43:26.804590    8888 main.go:141] libmachine: STDOUT: 
	I0719 07:43:26.804608    8888 main.go:141] libmachine: STDERR: 
	I0719 07:43:26.804660    8888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2 +20000M
	I0719 07:43:26.812612    8888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:26.812629    8888 main.go:141] libmachine: STDERR: 
	I0719 07:43:26.812641    8888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2
	I0719 07:43:26.812646    8888 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:26.812659    8888 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:26.812704    8888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:9b:2f:f6:54:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/flannel-047000/disk.qcow2
	I0719 07:43:26.814339    8888 main.go:141] libmachine: STDOUT: 
	I0719 07:43:26.814354    8888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:26.814365    8888 client.go:171] duration metric: took 273.252041ms to LocalClient.Create
	I0719 07:43:28.816570    8888 start.go:128] duration metric: took 2.330105417s to createHost
	I0719 07:43:28.816689    8888 start.go:83] releasing machines lock for "flannel-047000", held for 2.33066475s
	W0719 07:43:28.817182    8888 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:28.826840    8888 out.go:177] 
	W0719 07:43:28.832015    8888 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:43:28.832055    8888 out.go:239] * 
	* 
	W0719 07:43:28.834579    8888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:43:28.844939    8888 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.77s)
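
Note: same root cause as TestNetworkPlugins/group/auto/Start above; the CNI under test (flannel) is irrelevant because the run dies at VM creation. The failing step can be reproduced in isolation by wrapping any command with socket_vmnet_client; this is a sketch using the client and socket paths from the log, with `true` as a hypothetical stand-in for the qemu-system-aarch64 invocation.

	# Fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# while the daemon is down; runs the wrapped command once it is back.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true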

TestNetworkPlugins/group/enable-default-cni/Start (9.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.870130792s)

-- stdout --
	* [enable-default-cni-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-047000" primary control-plane node in "enable-default-cni-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:43:31.242327    9009 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:43:31.242469    9009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:31.242472    9009 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:31.242474    9009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:31.242611    9009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:43:31.243684    9009 out.go:298] Setting JSON to false
	I0719 07:43:31.260076    9009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6180,"bootTime":1721394031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:43:31.260149    9009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:43:31.263804    9009 out.go:177] * [enable-default-cni-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:43:31.270686    9009 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:43:31.270751    9009 notify.go:220] Checking for updates...
	I0719 07:43:31.277666    9009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:43:31.280701    9009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:43:31.283670    9009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:43:31.286631    9009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:43:31.289676    9009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:43:31.293013    9009 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:43:31.293079    9009 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:43:31.293133    9009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:43:31.297636    9009 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:43:31.304682    9009 start.go:297] selected driver: qemu2
	I0719 07:43:31.304687    9009 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:43:31.304693    9009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:43:31.306901    9009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:43:31.309715    9009 out.go:177] * Automatically selected the socket_vmnet network
	E0719 07:43:31.312689    9009 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0719 07:43:31.312699    9009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:43:31.312727    9009 cni.go:84] Creating CNI manager for "bridge"
	I0719 07:43:31.312731    9009 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:43:31.312760    9009 start.go:340] cluster config:
	{Name:enable-default-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:43:31.316260    9009 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:43:31.323657    9009 out.go:177] * Starting "enable-default-cni-047000" primary control-plane node in "enable-default-cni-047000" cluster
	I0719 07:43:31.327740    9009 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:43:31.327754    9009 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:43:31.327766    9009 cache.go:56] Caching tarball of preloaded images
	I0719 07:43:31.327825    9009 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:43:31.327832    9009 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:43:31.327908    9009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/enable-default-cni-047000/config.json ...
	I0719 07:43:31.327926    9009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/enable-default-cni-047000/config.json: {Name:mkf69576d63797e87ced16bb01e4064b183bbbc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:43:31.328198    9009 start.go:360] acquireMachinesLock for enable-default-cni-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:31.328232    9009 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "enable-default-cni-047000"
	I0719 07:43:31.328243    9009 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:31.328276    9009 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:31.336677    9009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:31.353564    9009 start.go:159] libmachine.API.Create for "enable-default-cni-047000" (driver="qemu2")
	I0719 07:43:31.353601    9009 client.go:168] LocalClient.Create starting
	I0719 07:43:31.353668    9009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:31.353705    9009 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:31.353715    9009 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:31.353756    9009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:31.353782    9009 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:31.353798    9009 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:31.354222    9009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:31.481290    9009 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:31.667750    9009 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:31.667759    9009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:31.667982    9009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2
	I0719 07:43:31.678269    9009 main.go:141] libmachine: STDOUT: 
	I0719 07:43:31.678292    9009 main.go:141] libmachine: STDERR: 
	I0719 07:43:31.678360    9009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2 +20000M
	I0719 07:43:31.686616    9009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:31.686631    9009 main.go:141] libmachine: STDERR: 
	I0719 07:43:31.686644    9009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2
	I0719 07:43:31.686649    9009 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:31.686663    9009 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:31.686688    9009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:0e:e3:7b:58:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2
	I0719 07:43:31.688320    9009 main.go:141] libmachine: STDOUT: 
	I0719 07:43:31.688336    9009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:31.688355    9009 client.go:171] duration metric: took 334.752875ms to LocalClient.Create
	I0719 07:43:33.690552    9009 start.go:128] duration metric: took 2.362271083s to createHost
	I0719 07:43:33.690628    9009 start.go:83] releasing machines lock for "enable-default-cni-047000", held for 2.36240925s
	W0719 07:43:33.690704    9009 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:33.702076    9009 out.go:177] * Deleting "enable-default-cni-047000" in qemu2 ...
	W0719 07:43:33.723555    9009 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:33.723623    9009 start.go:729] Will try again in 5 seconds ...
	I0719 07:43:38.725755    9009 start.go:360] acquireMachinesLock for enable-default-cni-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:38.725990    9009 start.go:364] duration metric: took 195.166µs to acquireMachinesLock for "enable-default-cni-047000"
	I0719 07:43:38.726068    9009 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:38.726153    9009 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:38.735926    9009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:38.763037    9009 start.go:159] libmachine.API.Create for "enable-default-cni-047000" (driver="qemu2")
	I0719 07:43:38.763083    9009 client.go:168] LocalClient.Create starting
	I0719 07:43:38.763184    9009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:38.763225    9009 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:38.763239    9009 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:38.763282    9009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:38.763312    9009 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:38.763325    9009 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:38.763689    9009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:38.892925    9009 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:39.024870    9009 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:39.024877    9009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:39.025061    9009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2
	I0719 07:43:39.034452    9009 main.go:141] libmachine: STDOUT: 
	I0719 07:43:39.034473    9009 main.go:141] libmachine: STDERR: 
	I0719 07:43:39.034525    9009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2 +20000M
	I0719 07:43:39.042561    9009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:39.042574    9009 main.go:141] libmachine: STDERR: 
	I0719 07:43:39.042592    9009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2
	I0719 07:43:39.042597    9009 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:39.042607    9009 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:39.042641    9009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:bd:8e:32:47:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/enable-default-cni-047000/disk.qcow2
	I0719 07:43:39.044344    9009 main.go:141] libmachine: STDOUT: 
	I0719 07:43:39.044356    9009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:39.044376    9009 client.go:171] duration metric: took 281.291375ms to LocalClient.Create
	I0719 07:43:41.046546    9009 start.go:128] duration metric: took 2.320383209s to createHost
	I0719 07:43:41.046615    9009 start.go:83] releasing machines lock for "enable-default-cni-047000", held for 2.32063375s
	W0719 07:43:41.046953    9009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:41.056486    9009 out.go:177] 
	W0719 07:43:41.059651    9009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:43:41.059701    9009 out.go:239] * 
	* 
	W0719 07:43:41.061658    9009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:43:41.070479    9009 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.87s)
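
Note: every failure in this group reduces to the same root cause. socket_vmnet_client is expected to connect to the unix socket at /var/run/socket_vmnet and hand the resulting connection to QEMU as file descriptor 3 (the "-netdev socket,id=net0,fd=3" in the command line above); because nothing is listening on that socket, the connect fails with "Connection refused" and QEMU never launches. A minimal Go sketch of the same reachability check (a hypothetical diagnostic for the build agent, not minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken from the logs above; adjust if socket_vmnet
		// was installed under a different prefix.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the failure mode in this
			// report: no daemon is accepting connections on the socket.
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

Run on the affected agent, this should reproduce the same "Connection refused" seen in every StartHost attempt, pointing at the host's socket_vmnet daemon rather than at the individual CNI configurations under test.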

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.768338708s)

                                                
                                                
-- stdout --
	* [bridge-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-047000" primary control-plane node in "bridge-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:43:43.271881    9122 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:43:43.272016    9122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:43.272020    9122 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:43.272023    9122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:43.272171    9122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:43:43.273358    9122 out.go:298] Setting JSON to false
	I0719 07:43:43.290486    9122 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6192,"bootTime":1721394031,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:43:43.290555    9122 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:43:43.299129    9122 out.go:177] * [bridge-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:43:43.306150    9122 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:43:43.306194    9122 notify.go:220] Checking for updates...
	I0719 07:43:43.312170    9122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:43:43.315030    9122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:43:43.318133    9122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:43:43.321178    9122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:43:43.322419    9122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:43:43.325432    9122 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:43:43.325499    9122 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:43:43.325546    9122 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:43:43.330130    9122 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:43:43.335153    9122 start.go:297] selected driver: qemu2
	I0719 07:43:43.335159    9122 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:43:43.335167    9122 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:43:43.337262    9122 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:43:43.340236    9122 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:43:43.343212    9122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:43:43.343225    9122 cni.go:84] Creating CNI manager for "bridge"
	I0719 07:43:43.343229    9122 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:43:43.343256    9122 start.go:340] cluster config:
	{Name:bridge-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:43:43.346700    9122 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:43:43.354194    9122 out.go:177] * Starting "bridge-047000" primary control-plane node in "bridge-047000" cluster
	I0719 07:43:43.358026    9122 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:43:43.358042    9122 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:43:43.358053    9122 cache.go:56] Caching tarball of preloaded images
	I0719 07:43:43.358100    9122 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:43:43.358105    9122 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:43:43.358151    9122 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/bridge-047000/config.json ...
	I0719 07:43:43.358162    9122 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/bridge-047000/config.json: {Name:mke7b697c3785e3ceae5a8b1ab9c35ad93dbd0e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:43:43.358437    9122 start.go:360] acquireMachinesLock for bridge-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:43.358467    9122 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "bridge-047000"
	I0719 07:43:43.358476    9122 start.go:93] Provisioning new machine with config: &{Name:bridge-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:43.358502    9122 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:43.363177    9122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:43.378315    9122 start.go:159] libmachine.API.Create for "bridge-047000" (driver="qemu2")
	I0719 07:43:43.378342    9122 client.go:168] LocalClient.Create starting
	I0719 07:43:43.378401    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:43.378431    9122 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:43.378439    9122 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:43.378474    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:43.378497    9122 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:43.378506    9122 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:43.378847    9122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:43.505253    9122 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:43.554555    9122 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:43.554565    9122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:43.554753    9122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2
	I0719 07:43:43.563922    9122 main.go:141] libmachine: STDOUT: 
	I0719 07:43:43.563939    9122 main.go:141] libmachine: STDERR: 
	I0719 07:43:43.563986    9122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2 +20000M
	I0719 07:43:43.571909    9122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:43.571922    9122 main.go:141] libmachine: STDERR: 
	I0719 07:43:43.571933    9122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2
	I0719 07:43:43.571937    9122 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:43.571949    9122 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:43.571975    9122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:bd:31:c7:6a:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2
	I0719 07:43:43.573526    9122 main.go:141] libmachine: STDOUT: 
	I0719 07:43:43.573554    9122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:43.573575    9122 client.go:171] duration metric: took 195.231167ms to LocalClient.Create
	I0719 07:43:45.575746    9122 start.go:128] duration metric: took 2.217239834s to createHost
	I0719 07:43:45.575815    9122 start.go:83] releasing machines lock for "bridge-047000", held for 2.217358916s
	W0719 07:43:45.575903    9122 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:45.587608    9122 out.go:177] * Deleting "bridge-047000" in qemu2 ...
	W0719 07:43:45.607541    9122 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:45.607567    9122 start.go:729] Will try again in 5 seconds ...
	I0719 07:43:50.609781    9122 start.go:360] acquireMachinesLock for bridge-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:50.610370    9122 start.go:364] duration metric: took 475.5µs to acquireMachinesLock for "bridge-047000"
	I0719 07:43:50.610507    9122 start.go:93] Provisioning new machine with config: &{Name:bridge-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:50.610780    9122 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:50.619414    9122 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:50.668784    9122 start.go:159] libmachine.API.Create for "bridge-047000" (driver="qemu2")
	I0719 07:43:50.668826    9122 client.go:168] LocalClient.Create starting
	I0719 07:43:50.668966    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:50.669040    9122 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:50.669060    9122 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:50.669122    9122 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:50.669167    9122 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:50.669178    9122 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:50.669801    9122 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:50.808123    9122 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:50.949373    9122 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:50.949386    9122 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:50.949617    9122 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2
	I0719 07:43:50.960208    9122 main.go:141] libmachine: STDOUT: 
	I0719 07:43:50.960235    9122 main.go:141] libmachine: STDERR: 
	I0719 07:43:50.960305    9122 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2 +20000M
	I0719 07:43:50.969618    9122 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:50.969642    9122 main.go:141] libmachine: STDERR: 
	I0719 07:43:50.969663    9122 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2
	I0719 07:43:50.969668    9122 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:50.969679    9122 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:50.969722    9122 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:66:06:54:a0:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/bridge-047000/disk.qcow2
	I0719 07:43:50.971807    9122 main.go:141] libmachine: STDOUT: 
	I0719 07:43:50.971826    9122 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:50.971867    9122 client.go:171] duration metric: took 303.039166ms to LocalClient.Create
	I0719 07:43:52.974044    9122 start.go:128] duration metric: took 2.363235542s to createHost
	I0719 07:43:52.974116    9122 start.go:83] releasing machines lock for "bridge-047000", held for 2.363746625s
	W0719 07:43:52.974435    9122 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:52.983072    9122 out.go:177] 
	W0719 07:43:52.987168    9122 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:43:52.987198    9122 out.go:239] * 
	* 
	W0719 07:43:52.989696    9122 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:43:52.998072    9122 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.77s)
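
Note: the trace above shows the driver's two-attempt flow: host creation fails, the half-created profile is deleted, the driver waits five seconds, retries once, and then exits with GUEST_PROVISION (exit status 80). A compressed Go sketch of that control flow (illustrative only; the function name below is hypothetical, not minikube's API):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the host-creation step traced above; on this
	// agent it always fails the same way, so the retry path is exercised.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "bridge-047000"
		if err := createHost(profile); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// The real flow also deletes the failed profile before retrying;
			// that cleanup is elided here.
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = createHost(profile); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // matches the observed exit status
			}
		}
	}

Because the failure is environmental, the retry necessarily hits the same refused connection, which is why each test in this group fails in roughly ten seconds: two creation attempts of about two seconds each plus the five-second back-off.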

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.830724042s)

                                                
                                                
-- stdout --
	* [kubenet-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-047000" primary control-plane node in "kubenet-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:43:55.195702    9231 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:43:55.195823    9231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:55.195826    9231 out.go:304] Setting ErrFile to fd 2...
	I0719 07:43:55.195829    9231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:43:55.195959    9231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:43:55.197057    9231 out.go:298] Setting JSON to false
	I0719 07:43:55.213320    9231 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6204,"bootTime":1721394031,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:43:55.213400    9231 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:43:55.218173    9231 out.go:177] * [kubenet-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:43:55.226064    9231 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:43:55.226139    9231 notify.go:220] Checking for updates...
	I0719 07:43:55.231519    9231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:43:55.234129    9231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:43:55.237068    9231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:43:55.240119    9231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:43:55.243040    9231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:43:55.246447    9231 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:43:55.246516    9231 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:43:55.246564    9231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:43:55.251080    9231 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:43:55.258069    9231 start.go:297] selected driver: qemu2
	I0719 07:43:55.258075    9231 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:43:55.258084    9231 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:43:55.260346    9231 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:43:55.263147    9231 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:43:55.268530    9231 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:43:55.268558    9231 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0719 07:43:55.268581    9231 start.go:340] cluster config:
	{Name:kubenet-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:43:55.272513    9231 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:43:55.279970    9231 out.go:177] * Starting "kubenet-047000" primary control-plane node in "kubenet-047000" cluster
	I0719 07:43:55.284078    9231 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:43:55.284094    9231 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:43:55.284109    9231 cache.go:56] Caching tarball of preloaded images
	I0719 07:43:55.284204    9231 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:43:55.284211    9231 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:43:55.284281    9231 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/kubenet-047000/config.json ...
	I0719 07:43:55.284293    9231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/kubenet-047000/config.json: {Name:mk8d52f0287402f5118dbeb652e541a18e65c7b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:43:55.284580    9231 start.go:360] acquireMachinesLock for kubenet-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:43:55.284613    9231 start.go:364] duration metric: took 28.292µs to acquireMachinesLock for "kubenet-047000"
	I0719 07:43:55.284625    9231 start.go:93] Provisioning new machine with config: &{Name:kubenet-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:43:55.284658    9231 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:43:55.292074    9231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:43:55.309287    9231 start.go:159] libmachine.API.Create for "kubenet-047000" (driver="qemu2")
	I0719 07:43:55.309319    9231 client.go:168] LocalClient.Create starting
	I0719 07:43:55.309381    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:43:55.309410    9231 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:55.309419    9231 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:55.309455    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:43:55.309477    9231 main.go:141] libmachine: Decoding PEM data...
	I0719 07:43:55.309485    9231 main.go:141] libmachine: Parsing certificate...
	I0719 07:43:55.309881    9231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:43:55.437191    9231 main.go:141] libmachine: Creating SSH key...
	I0719 07:43:55.512236    9231 main.go:141] libmachine: Creating Disk image...
	I0719 07:43:55.512242    9231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:43:55.512425    9231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2
	I0719 07:43:55.521695    9231 main.go:141] libmachine: STDOUT: 
	I0719 07:43:55.521718    9231 main.go:141] libmachine: STDERR: 
	I0719 07:43:55.521769    9231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2 +20000M
	I0719 07:43:55.529689    9231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:43:55.529703    9231 main.go:141] libmachine: STDERR: 
	I0719 07:43:55.529720    9231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2
	I0719 07:43:55.529725    9231 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:43:55.529735    9231 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:43:55.529766    9231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:81:43:7b:08:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2
	I0719 07:43:55.531329    9231 main.go:141] libmachine: STDOUT: 
	I0719 07:43:55.531341    9231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:43:55.531366    9231 client.go:171] duration metric: took 222.04575ms to LocalClient.Create
	I0719 07:43:57.533568    9231 start.go:128] duration metric: took 2.248895625s to createHost
	I0719 07:43:57.533650    9231 start.go:83] releasing machines lock for "kubenet-047000", held for 2.249048291s
	W0719 07:43:57.533752    9231 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:57.544080    9231 out.go:177] * Deleting "kubenet-047000" in qemu2 ...
	W0719 07:43:57.566341    9231 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:43:57.566374    9231 start.go:729] Will try again in 5 seconds ...
	I0719 07:44:02.568596    9231 start.go:360] acquireMachinesLock for kubenet-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:02.569253    9231 start.go:364] duration metric: took 543.709µs to acquireMachinesLock for "kubenet-047000"
	I0719 07:44:02.569395    9231 start.go:93] Provisioning new machine with config: &{Name:kubenet-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:02.569631    9231 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:02.579252    9231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:02.632089    9231 start.go:159] libmachine.API.Create for "kubenet-047000" (driver="qemu2")
	I0719 07:44:02.632159    9231 client.go:168] LocalClient.Create starting
	I0719 07:44:02.632282    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:02.632345    9231 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:02.632362    9231 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:02.632431    9231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:02.632476    9231 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:02.632490    9231 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:02.633052    9231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:02.772497    9231 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:02.940980    9231 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:02.940989    9231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:02.941201    9231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2
	I0719 07:44:02.950868    9231 main.go:141] libmachine: STDOUT: 
	I0719 07:44:02.950888    9231 main.go:141] libmachine: STDERR: 
	I0719 07:44:02.950950    9231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2 +20000M
	I0719 07:44:02.959435    9231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:02.959453    9231 main.go:141] libmachine: STDERR: 
	I0719 07:44:02.959465    9231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2
	I0719 07:44:02.959470    9231 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:02.959476    9231 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:02.959508    9231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:bf:b1:cb:33:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kubenet-047000/disk.qcow2
	I0719 07:44:02.961192    9231 main.go:141] libmachine: STDOUT: 
	I0719 07:44:02.961215    9231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:02.961228    9231 client.go:171] duration metric: took 329.065208ms to LocalClient.Create
	I0719 07:44:04.963328    9231 start.go:128] duration metric: took 2.393697791s to createHost
	I0719 07:44:04.963357    9231 start.go:83] releasing machines lock for "kubenet-047000", held for 2.39410225s
	W0719 07:44:04.963565    9231 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:04.972218    9231 out.go:177] 
	W0719 07:44:04.977257    9231 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:44:04.977277    9231 out.go:239] * 
	* 
	W0719 07:44:04.978701    9231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:44:04.989227    9231 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)
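Every start attempt in this group dies at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon's UNIX socket at /var/run/socket_vmnet. A minimal way to probe that socket outside the test harness, assuming the install layout shown in the log (the use of `true` as the wrapped command is purely illustrative):

    # Does the socket_vmnet daemon's UNIX socket exist on the agent?
    ls -l /var/run/socket_vmnet

    # Probe connectivity the same way libmachine does: the client dials the
    # socket and execs the wrapped command with the socket passed as fd 3.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the second command prints 'Failed to connect to "/var/run/socket_vmnet": Connection refused', the daemon is not listening on that path, which matches the error recorded above.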

TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.825966625s)

-- stdout --
	* [kindnet-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-047000" primary control-plane node in "kindnet-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:44:07.126492    9340 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:44:07.126632    9340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:07.126636    9340 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:07.126638    9340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:07.126768    9340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:44:07.127950    9340 out.go:298] Setting JSON to false
	I0719 07:44:07.144469    9340 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6216,"bootTime":1721394031,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:44:07.144539    9340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:44:07.150116    9340 out.go:177] * [kindnet-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:44:07.156124    9340 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:44:07.156185    9340 notify.go:220] Checking for updates...
	I0719 07:44:07.163081    9340 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:44:07.166062    9340 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:44:07.169143    9340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:44:07.172089    9340 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:44:07.175050    9340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:44:07.178484    9340 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:44:07.178551    9340 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:44:07.178600    9340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:44:07.181938    9340 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:44:07.189103    9340 start.go:297] selected driver: qemu2
	I0719 07:44:07.189110    9340 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:44:07.189117    9340 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:44:07.191366    9340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:44:07.192551    9340 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:44:07.195210    9340 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:44:07.195238    9340 cni.go:84] Creating CNI manager for "kindnet"
	I0719 07:44:07.195245    9340 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 07:44:07.195270    9340 start.go:340] cluster config:
	{Name:kindnet-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:44:07.198884    9340 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:44:07.206078    9340 out.go:177] * Starting "kindnet-047000" primary control-plane node in "kindnet-047000" cluster
	I0719 07:44:07.210082    9340 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:44:07.210095    9340 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:44:07.210105    9340 cache.go:56] Caching tarball of preloaded images
	I0719 07:44:07.210162    9340 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:44:07.210167    9340 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:44:07.210223    9340 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/kindnet-047000/config.json ...
	I0719 07:44:07.210236    9340 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/kindnet-047000/config.json: {Name:mk9e9f3752e3a1a336092d0f630a484a695b5afd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:44:07.210453    9340 start.go:360] acquireMachinesLock for kindnet-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:07.210487    9340 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "kindnet-047000"
	I0719 07:44:07.210497    9340 start.go:93] Provisioning new machine with config: &{Name:kindnet-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:07.210539    9340 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:07.219052    9340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:07.235452    9340 start.go:159] libmachine.API.Create for "kindnet-047000" (driver="qemu2")
	I0719 07:44:07.235483    9340 client.go:168] LocalClient.Create starting
	I0719 07:44:07.235544    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:07.235576    9340 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:07.235588    9340 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:07.235623    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:07.235645    9340 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:07.235656    9340 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:07.235971    9340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:07.364274    9340 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:07.580942    9340 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:07.580955    9340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:07.581201    9340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2
	I0719 07:44:07.591180    9340 main.go:141] libmachine: STDOUT: 
	I0719 07:44:07.591203    9340 main.go:141] libmachine: STDERR: 
	I0719 07:44:07.591253    9340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2 +20000M
	I0719 07:44:07.599604    9340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:07.599618    9340 main.go:141] libmachine: STDERR: 
	I0719 07:44:07.599634    9340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2
	I0719 07:44:07.599641    9340 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:07.599652    9340 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:07.599683    9340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:12:7a:10:36:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2
	I0719 07:44:07.601357    9340 main.go:141] libmachine: STDOUT: 
	I0719 07:44:07.601374    9340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:07.601393    9340 client.go:171] duration metric: took 365.910375ms to LocalClient.Create
	I0719 07:44:09.603590    9340 start.go:128] duration metric: took 2.393041667s to createHost
	I0719 07:44:09.603673    9340 start.go:83] releasing machines lock for "kindnet-047000", held for 2.393198291s
	W0719 07:44:09.603759    9340 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:09.610822    9340 out.go:177] * Deleting "kindnet-047000" in qemu2 ...
	W0719 07:44:09.633776    9340 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:09.633810    9340 start.go:729] Will try again in 5 seconds ...
	I0719 07:44:14.636025    9340 start.go:360] acquireMachinesLock for kindnet-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:14.636650    9340 start.go:364] duration metric: took 499.042µs to acquireMachinesLock for "kindnet-047000"
	I0719 07:44:14.636793    9340 start.go:93] Provisioning new machine with config: &{Name:kindnet-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:14.637129    9340 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:14.645860    9340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:14.690475    9340 start.go:159] libmachine.API.Create for "kindnet-047000" (driver="qemu2")
	I0719 07:44:14.690524    9340 client.go:168] LocalClient.Create starting
	I0719 07:44:14.690641    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:14.690715    9340 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:14.690731    9340 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:14.690825    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:14.690870    9340 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:14.690892    9340 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:14.691491    9340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:14.829240    9340 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:14.869315    9340 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:14.869320    9340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:14.869502    9340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2
	I0719 07:44:14.878842    9340 main.go:141] libmachine: STDOUT: 
	I0719 07:44:14.878859    9340 main.go:141] libmachine: STDERR: 
	I0719 07:44:14.878911    9340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2 +20000M
	I0719 07:44:14.886904    9340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:14.886918    9340 main.go:141] libmachine: STDERR: 
	I0719 07:44:14.886937    9340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2
	I0719 07:44:14.886942    9340 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:14.886950    9340 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:14.886975    9340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:c8:10:72:ad:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/kindnet-047000/disk.qcow2
	I0719 07:44:14.888635    9340 main.go:141] libmachine: STDOUT: 
	I0719 07:44:14.888651    9340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:14.888664    9340 client.go:171] duration metric: took 198.136917ms to LocalClient.Create
	I0719 07:44:16.890811    9340 start.go:128] duration metric: took 2.253595958s to createHost
	I0719 07:44:16.890855    9340 start.go:83] releasing machines lock for "kindnet-047000", held for 2.254196959s
	W0719 07:44:16.891024    9340 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:16.898423    9340 out.go:177] 
	W0719 07:44:16.903521    9340 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:44:16.903536    9340 out.go:239] * 
	* 
	W0719 07:44:16.904455    9340 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:44:16.915431    9340 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
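Note that disk-image preparation succeeds on every attempt ("Image resized.", empty STDERR); only the network attach that follows fails. The two qemu-img steps from the log can be reproduced in isolation; the relative paths below are hypothetical stand-ins for the per-profile machine directory:

    # Convert the raw boot disk to qcow2, as libmachine does
    qemu-img convert -f raw -O qcow2 ./disk.qcow2.raw ./disk.qcow2

    # Grow the qcow2 image by 20000 MB, matching the requested disk size
    qemu-img resize ./disk.qcow2 +20000M

If both steps also exit cleanly on the agent, the failure is isolated to the socket_vmnet handoff that follows them.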

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.801001959s)

-- stdout --
	* [calico-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-047000" primary control-plane node in "calico-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:44:19.129098    9453 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:44:19.129233    9453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:19.129237    9453 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:19.129239    9453 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:19.129363    9453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:44:19.130363    9453 out.go:298] Setting JSON to false
	I0719 07:44:19.146914    9453 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6228,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:44:19.146994    9453 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:44:19.152378    9453 out.go:177] * [calico-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:44:19.160503    9453 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:44:19.160556    9453 notify.go:220] Checking for updates...
	I0719 07:44:19.167450    9453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:44:19.170495    9453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:44:19.173547    9453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:44:19.176461    9453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:44:19.179533    9453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:44:19.182687    9453 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:44:19.182748    9453 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:44:19.182797    9453 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:44:19.186471    9453 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:44:19.193392    9453 start.go:297] selected driver: qemu2
	I0719 07:44:19.193399    9453 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:44:19.193407    9453 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:44:19.195565    9453 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:44:19.198427    9453 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:44:19.201594    9453 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:44:19.201617    9453 cni.go:84] Creating CNI manager for "calico"
	I0719 07:44:19.201621    9453 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0719 07:44:19.201656    9453 start.go:340] cluster config:
	{Name:calico-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:44:19.205215    9453 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:44:19.212464    9453 out.go:177] * Starting "calico-047000" primary control-plane node in "calico-047000" cluster
	I0719 07:44:19.216242    9453 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:44:19.216254    9453 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:44:19.216262    9453 cache.go:56] Caching tarball of preloaded images
	I0719 07:44:19.216306    9453 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:44:19.216311    9453 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:44:19.216354    9453 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/calico-047000/config.json ...
	I0719 07:44:19.216366    9453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/calico-047000/config.json: {Name:mk5084dade98299fdbfe0acc39f557b85b1b8001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:44:19.216661    9453 start.go:360] acquireMachinesLock for calico-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:19.216694    9453 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "calico-047000"
	I0719 07:44:19.216705    9453 start.go:93] Provisioning new machine with config: &{Name:calico-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:19.216728    9453 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:19.225463    9453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:19.240898    9453 start.go:159] libmachine.API.Create for "calico-047000" (driver="qemu2")
	I0719 07:44:19.240930    9453 client.go:168] LocalClient.Create starting
	I0719 07:44:19.240998    9453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:19.241030    9453 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:19.241041    9453 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:19.241075    9453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:19.241100    9453 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:19.241109    9453 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:19.241552    9453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:19.368460    9453 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:19.439985    9453 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:19.439991    9453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:19.440164    9453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2
	I0719 07:44:19.449417    9453 main.go:141] libmachine: STDOUT: 
	I0719 07:44:19.449435    9453 main.go:141] libmachine: STDERR: 
	I0719 07:44:19.449488    9453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2 +20000M
	I0719 07:44:19.457429    9453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:19.457442    9453 main.go:141] libmachine: STDERR: 
	I0719 07:44:19.457452    9453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2
	I0719 07:44:19.457457    9453 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:19.457477    9453 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:19.457508    9453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:79:52:a7:72:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2
	I0719 07:44:19.459096    9453 main.go:141] libmachine: STDOUT: 
	I0719 07:44:19.459120    9453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:19.459143    9453 client.go:171] duration metric: took 218.211541ms to LocalClient.Create
	I0719 07:44:21.461325    9453 start.go:128] duration metric: took 2.244587584s to createHost
	I0719 07:44:21.461403    9453 start.go:83] releasing machines lock for "calico-047000", held for 2.244719792s
	W0719 07:44:21.461513    9453 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:21.472801    9453 out.go:177] * Deleting "calico-047000" in qemu2 ...
	W0719 07:44:21.495508    9453 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:21.495537    9453 start.go:729] Will try again in 5 seconds ...
	I0719 07:44:26.497764    9453 start.go:360] acquireMachinesLock for calico-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:26.498270    9453 start.go:364] duration metric: took 394.208µs to acquireMachinesLock for "calico-047000"
	I0719 07:44:26.498396    9453 start.go:93] Provisioning new machine with config: &{Name:calico-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:26.498687    9453 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:26.503555    9453 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:26.548488    9453 start.go:159] libmachine.API.Create for "calico-047000" (driver="qemu2")
	I0719 07:44:26.548543    9453 client.go:168] LocalClient.Create starting
	I0719 07:44:26.548693    9453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:26.548796    9453 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:26.548812    9453 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:26.548854    9453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:26.548897    9453 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:26.548920    9453 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:26.549696    9453 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:26.685037    9453 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:26.851779    9453 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:26.851791    9453 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:26.852040    9453 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2
	I0719 07:44:26.861304    9453 main.go:141] libmachine: STDOUT: 
	I0719 07:44:26.861326    9453 main.go:141] libmachine: STDERR: 
	I0719 07:44:26.861375    9453 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2 +20000M
	I0719 07:44:26.869574    9453 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:26.869589    9453 main.go:141] libmachine: STDERR: 
	I0719 07:44:26.869603    9453 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2
	I0719 07:44:26.869607    9453 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:26.869619    9453 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:26.869663    9453 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:77:65:eb:16:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/calico-047000/disk.qcow2
	I0719 07:44:26.871328    9453 main.go:141] libmachine: STDOUT: 
	I0719 07:44:26.871343    9453 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:26.871356    9453 client.go:171] duration metric: took 322.8095ms to LocalClient.Create
	I0719 07:44:28.873441    9453 start.go:128] duration metric: took 2.374751834s to createHost
	I0719 07:44:28.873528    9453 start.go:83] releasing machines lock for "calico-047000", held for 2.375207666s
	W0719 07:44:28.873724    9453 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:28.879250    9453 out.go:177] 
	W0719 07:44:28.883237    9453 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:44:28.883276    9453 out.go:239] * 
	* 
	W0719 07:44:28.884561    9453 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:44:28.894228    9453 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
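The error text itself names the recovery path: delete the half-created profile before retrying. A short cleanup sketch using the same binary and profile name as this test (the pgrep check is an extra sanity step, not something the suite runs):

    # Remove the partially created profile, per the advice in the log
    out/minikube-darwin-arm64 delete -p calico-047000

    # Confirm no stale VM process is still holding the profile's disk image
    pgrep -fl qemu-system-aarch64

Since kubenet, kindnet, and calico all fail with the identical "Connection refused" on /var/run/socket_vmnet, the root cause is environmental on the build agent rather than specific to any CNI under test.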

TestNetworkPlugins/group/custom-flannel/Start (9.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.87899925s)

-- stdout --
	* [custom-flannel-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-047000" primary control-plane node in "custom-flannel-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:44:31.213779    9574 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:44:31.213894    9574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:31.213901    9574 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:31.213902    9574 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:31.214032    9574 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:44:31.215114    9574 out.go:298] Setting JSON to false
	I0719 07:44:31.231285    9574 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6240,"bootTime":1721394031,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:44:31.231348    9574 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:44:31.235337    9574 out.go:177] * [custom-flannel-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:44:31.243191    9574 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:44:31.243262    9574 notify.go:220] Checking for updates...
	I0719 07:44:31.250158    9574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:44:31.253242    9574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:44:31.256229    9574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:44:31.259189    9574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:44:31.262221    9574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:44:31.265561    9574 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:44:31.265634    9574 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:44:31.265674    9574 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:44:31.269174    9574 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:44:31.276220    9574 start.go:297] selected driver: qemu2
	I0719 07:44:31.276227    9574 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:44:31.276235    9574 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:44:31.278475    9574 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:44:31.281087    9574 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:44:31.284276    9574 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:44:31.284299    9574 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0719 07:44:31.284321    9574 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0719 07:44:31.284354    9574 start.go:340] cluster config:
	{Name:custom-flannel-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:44:31.288005    9574 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:44:31.295140    9574 out.go:177] * Starting "custom-flannel-047000" primary control-plane node in "custom-flannel-047000" cluster
	I0719 07:44:31.299134    9574 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:44:31.299147    9574 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:44:31.299156    9574 cache.go:56] Caching tarball of preloaded images
	I0719 07:44:31.299207    9574 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:44:31.299212    9574 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:44:31.299264    9574 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/custom-flannel-047000/config.json ...
	I0719 07:44:31.299276    9574 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/custom-flannel-047000/config.json: {Name:mkb88d6354b33ad3d45b599ea0fb9235de733699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:44:31.299566    9574 start.go:360] acquireMachinesLock for custom-flannel-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:31.299599    9574 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "custom-flannel-047000"
	I0719 07:44:31.299609    9574 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:31.299636    9574 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:31.308180    9574 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:31.325332    9574 start.go:159] libmachine.API.Create for "custom-flannel-047000" (driver="qemu2")
	I0719 07:44:31.325361    9574 client.go:168] LocalClient.Create starting
	I0719 07:44:31.325427    9574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:31.325457    9574 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:31.325469    9574 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:31.325504    9574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:31.325526    9574 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:31.325532    9574 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:31.325884    9574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:31.453981    9574 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:31.631170    9574 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:31.631177    9574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:31.631387    9574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2
	I0719 07:44:31.641085    9574 main.go:141] libmachine: STDOUT: 
	I0719 07:44:31.641106    9574 main.go:141] libmachine: STDERR: 
	I0719 07:44:31.641158    9574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2 +20000M
	I0719 07:44:31.649153    9574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:31.649170    9574 main.go:141] libmachine: STDERR: 
	I0719 07:44:31.649189    9574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2
	I0719 07:44:31.649194    9574 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:31.649207    9574 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:31.649234    9574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:41:d9:73:26:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2
	I0719 07:44:31.650873    9574 main.go:141] libmachine: STDOUT: 
	I0719 07:44:31.650920    9574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:31.650936    9574 client.go:171] duration metric: took 325.572958ms to LocalClient.Create
	I0719 07:44:33.653170    9574 start.go:128] duration metric: took 2.353524125s to createHost
	I0719 07:44:33.653239    9574 start.go:83] releasing machines lock for "custom-flannel-047000", held for 2.353652333s
	W0719 07:44:33.653307    9574 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:33.664573    9574 out.go:177] * Deleting "custom-flannel-047000" in qemu2 ...
	W0719 07:44:33.686936    9574 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:33.686967    9574 start.go:729] Will try again in 5 seconds ...
	I0719 07:44:38.689250    9574 start.go:360] acquireMachinesLock for custom-flannel-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:38.689894    9574 start.go:364] duration metric: took 422.875µs to acquireMachinesLock for "custom-flannel-047000"
	I0719 07:44:38.689967    9574 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:38.690267    9574 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:38.700053    9574 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:38.751996    9574 start.go:159] libmachine.API.Create for "custom-flannel-047000" (driver="qemu2")
	I0719 07:44:38.752069    9574 client.go:168] LocalClient.Create starting
	I0719 07:44:38.752197    9574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:38.752271    9574 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:38.752287    9574 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:38.752356    9574 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:38.752401    9574 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:38.752413    9574 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:38.752974    9574 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:38.893021    9574 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:39.005556    9574 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:39.005565    9574 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:39.005777    9574 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2
	I0719 07:44:39.015060    9574 main.go:141] libmachine: STDOUT: 
	I0719 07:44:39.015084    9574 main.go:141] libmachine: STDERR: 
	I0719 07:44:39.015143    9574 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2 +20000M
	I0719 07:44:39.023269    9574 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:39.023289    9574 main.go:141] libmachine: STDERR: 
	I0719 07:44:39.023300    9574 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2
	I0719 07:44:39.023313    9574 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:39.023325    9574 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:39.023357    9574 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:a2:a8:2f:35:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/custom-flannel-047000/disk.qcow2
	I0719 07:44:39.025115    9574 main.go:141] libmachine: STDOUT: 
	I0719 07:44:39.025136    9574 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:39.025149    9574 client.go:171] duration metric: took 273.077041ms to LocalClient.Create
	I0719 07:44:41.027334    9574 start.go:128] duration metric: took 2.337053375s to createHost
	I0719 07:44:41.027405    9574 start.go:83] releasing machines lock for "custom-flannel-047000", held for 2.337507708s
	W0719 07:44:41.027845    9574 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:41.036474    9574 out.go:177] 
	W0719 07:44:41.040431    9574 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:44:41.040448    9574 out.go:239] * 
	* 
	W0719 07:44:41.042550    9574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:44:41.051240    9574 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.88s)
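
Every qemu2 failure in this report reduces to the same root cause visible above: socket_vmnet_client cannot reach the daemon's unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and VM creation aborts. A minimal Go probe, sketched here as a hypothetical diagnostic (it is not part of minikube or this test suite), reproduces the connect step that the ERROR lines show failing:

    // socketprobe.go - hypothetical diagnostic; dials the unix socket that
    // socket_vmnet is expected to serve. When the daemon is down, DialTimeout
    // fails with "connect: connection refused", matching the ERROR lines above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Path taken from SocketVMnetPath in the cluster config dumps above.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe reports "connection refused", the socket_vmnet daemon (installed under /opt/socket_vmnet on this agent, per the SocketVMnetClientPath above) is not running, and no qemu2 start in this report can succeed until it is restarted.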

TestNetworkPlugins/group/false/Start (9.85s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-047000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.852261834s)

-- stdout --
	* [false-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-047000" primary control-plane node in "false-047000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-047000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:44:43.445689    9694 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:44:43.445827    9694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:43.445830    9694 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:43.445833    9694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:43.445962    9694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:44:43.447073    9694 out.go:298] Setting JSON to false
	I0719 07:44:43.463824    9694 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6252,"bootTime":1721394031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:44:43.463887    9694 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:44:43.468669    9694 out.go:177] * [false-047000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:44:43.476587    9694 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:44:43.476627    9694 notify.go:220] Checking for updates...
	I0719 07:44:43.482515    9694 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:44:43.485501    9694 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:44:43.486704    9694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:44:43.489485    9694 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:44:43.492475    9694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:44:43.495898    9694 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:44:43.495976    9694 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:44:43.496029    9694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:44:43.500421    9694 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:44:43.507518    9694 start.go:297] selected driver: qemu2
	I0719 07:44:43.507528    9694 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:44:43.507535    9694 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:44:43.509782    9694 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:44:43.512444    9694 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:44:43.515547    9694 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:44:43.515578    9694 cni.go:84] Creating CNI manager for "false"
	I0719 07:44:43.515607    9694 start.go:340] cluster config:
	{Name:false-047000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:44:43.519093    9694 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:44:43.526484    9694 out.go:177] * Starting "false-047000" primary control-plane node in "false-047000" cluster
	I0719 07:44:43.530534    9694 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:44:43.530552    9694 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:44:43.530563    9694 cache.go:56] Caching tarball of preloaded images
	I0719 07:44:43.530634    9694 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:44:43.530641    9694 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:44:43.530698    9694 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/false-047000/config.json ...
	I0719 07:44:43.530710    9694 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/false-047000/config.json: {Name:mk6a9ff8efcac843a814ec400040b4a4f867ccad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:44:43.531032    9694 start.go:360] acquireMachinesLock for false-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:43.531065    9694 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "false-047000"
	I0719 07:44:43.531074    9694 start.go:93] Provisioning new machine with config: &{Name:false-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:43.531098    9694 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:43.535487    9694 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:43.550383    9694 start.go:159] libmachine.API.Create for "false-047000" (driver="qemu2")
	I0719 07:44:43.550409    9694 client.go:168] LocalClient.Create starting
	I0719 07:44:43.550463    9694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:43.550493    9694 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:43.550503    9694 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:43.550538    9694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:43.550561    9694 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:43.550571    9694 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:43.551003    9694 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:43.677092    9694 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:43.804913    9694 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:43.804926    9694 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:43.805158    9694 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2
	I0719 07:44:43.814858    9694 main.go:141] libmachine: STDOUT: 
	I0719 07:44:43.814877    9694 main.go:141] libmachine: STDERR: 
	I0719 07:44:43.814922    9694 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2 +20000M
	I0719 07:44:43.822946    9694 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:43.822969    9694 main.go:141] libmachine: STDERR: 
	I0719 07:44:43.822991    9694 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2
	I0719 07:44:43.822996    9694 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:43.823009    9694 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:43.823032    9694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ee:42:77:5a:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2
	I0719 07:44:43.824677    9694 main.go:141] libmachine: STDOUT: 
	I0719 07:44:43.824693    9694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:43.824710    9694 client.go:171] duration metric: took 274.300375ms to LocalClient.Create
	I0719 07:44:45.826926    9694 start.go:128] duration metric: took 2.295823458s to createHost
	I0719 07:44:45.827053    9694 start.go:83] releasing machines lock for "false-047000", held for 2.296002042s
	W0719 07:44:45.827130    9694 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:45.838897    9694 out.go:177] * Deleting "false-047000" in qemu2 ...
	W0719 07:44:45.858230    9694 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:45.858257    9694 start.go:729] Will try again in 5 seconds ...
	I0719 07:44:50.860420    9694 start.go:360] acquireMachinesLock for false-047000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:50.860931    9694 start.go:364] duration metric: took 427.417µs to acquireMachinesLock for "false-047000"
	I0719 07:44:50.861023    9694 start.go:93] Provisioning new machine with config: &{Name:false-047000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-047000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:50.861312    9694 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:50.870884    9694 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 07:44:50.919683    9694 start.go:159] libmachine.API.Create for "false-047000" (driver="qemu2")
	I0719 07:44:50.919744    9694 client.go:168] LocalClient.Create starting
	I0719 07:44:50.919861    9694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:50.919922    9694 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:50.919940    9694 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:50.920018    9694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:50.920064    9694 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:50.920076    9694 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:50.920596    9694 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:51.058442    9694 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:51.208748    9694 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:51.208758    9694 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:51.208989    9694 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2
	I0719 07:44:51.218770    9694 main.go:141] libmachine: STDOUT: 
	I0719 07:44:51.218787    9694 main.go:141] libmachine: STDERR: 
	I0719 07:44:51.218839    9694 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2 +20000M
	I0719 07:44:51.226793    9694 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:51.226808    9694 main.go:141] libmachine: STDERR: 
	I0719 07:44:51.226820    9694 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2
	I0719 07:44:51.226823    9694 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:51.226836    9694 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:51.226860    9694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:b6:3d:11:c7:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/false-047000/disk.qcow2
	I0719 07:44:51.228533    9694 main.go:141] libmachine: STDOUT: 
	I0719 07:44:51.228549    9694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:51.228562    9694 client.go:171] duration metric: took 308.816042ms to LocalClient.Create
	I0719 07:44:53.230214    9694 start.go:128] duration metric: took 2.36889s to createHost
	I0719 07:44:53.230290    9694 start.go:83] releasing machines lock for "false-047000", held for 2.369358209s
	W0719 07:44:53.230726    9694 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-047000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:53.239259    9694 out.go:177] 
	W0719 07:44:53.243400    9694 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:44:53.243443    9694 out.go:239] * 
	* 
	W0719 07:44:53.245784    9694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:44:53.255234    9694 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.85s)
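
As with custom-flannel above, the trace shows minikube's create-retry path: the first createHost attempt fails at the socket_vmnet connect, the half-created profile is deleted, a second attempt runs 5 seconds later, and the command exits with status 80 (GUEST_PROVISION) when it fails again. A rough sketch of that control flow, with hypothetical helper names standing in for the real start.go logic:

    // retrysketch.go - hypothetical condensation of the retry-once flow in the
    // logs: create, delete the profile on failure, wait 5s, retry, then exit
    // with status 80 when the second attempt also cannot reach socket_vmnet.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // createHost stands in for the libmachine create call; here it always
    // fails the way the logs do when the socket_vmnet daemon is not running.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := createHost()
        if err == nil {
            return
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
        if err := createHost(); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
            os.Exit(80) // the exit status 80 that net_test.go reports
        }
    }

Because both attempts hit the same dead socket, the retry never helps here; every remaining qemu2 test in this run fails on the same two-attempt pattern.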

TestStartStop/group/old-k8s-version/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-572000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-572000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.748812709s)

-- stdout --
	* [old-k8s-version-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-572000" primary control-plane node in "old-k8s-version-572000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-572000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:44:55.449789    9803 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:44:55.449920    9803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:55.449924    9803 out.go:304] Setting ErrFile to fd 2...
	I0719 07:44:55.449927    9803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:44:55.450063    9803 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:44:55.451187    9803 out.go:298] Setting JSON to false
	I0719 07:44:55.467809    9803 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6264,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:44:55.467878    9803 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:44:55.473557    9803 out.go:177] * [old-k8s-version-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:44:55.479097    9803 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:44:55.479159    9803 notify.go:220] Checking for updates...
	I0719 07:44:55.486569    9803 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:44:55.489406    9803 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:44:55.492527    9803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:44:55.495514    9803 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:44:55.496832    9803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:44:55.499788    9803 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:44:55.499853    9803 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:44:55.499920    9803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:44:55.504442    9803 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:44:55.509540    9803 start.go:297] selected driver: qemu2
	I0719 07:44:55.509548    9803 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:44:55.509556    9803 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:44:55.511770    9803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:44:55.514479    9803 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:44:55.517572    9803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:44:55.517586    9803 cni.go:84] Creating CNI manager for ""
	I0719 07:44:55.517592    9803 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 07:44:55.517616    9803 start.go:340] cluster config:
	{Name:old-k8s-version-572000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:44:55.521083    9803 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:44:55.528565    9803 out.go:177] * Starting "old-k8s-version-572000" primary control-plane node in "old-k8s-version-572000" cluster
	I0719 07:44:55.532489    9803 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:44:55.532505    9803 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 07:44:55.532515    9803 cache.go:56] Caching tarball of preloaded images
	I0719 07:44:55.532576    9803 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:44:55.532581    9803 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 07:44:55.532639    9803 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/old-k8s-version-572000/config.json ...
	I0719 07:44:55.532650    9803 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/old-k8s-version-572000/config.json: {Name:mk289874c7897bd1d703d06c627d590abc8dc305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:44:55.532925    9803 start.go:360] acquireMachinesLock for old-k8s-version-572000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:44:55.532954    9803 start.go:364] duration metric: took 24.209µs to acquireMachinesLock for "old-k8s-version-572000"
	I0719 07:44:55.532964    9803 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:44:55.532985    9803 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:44:55.537508    9803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:44:55.552448    9803 start.go:159] libmachine.API.Create for "old-k8s-version-572000" (driver="qemu2")
	I0719 07:44:55.552472    9803 client.go:168] LocalClient.Create starting
	I0719 07:44:55.552550    9803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:44:55.552582    9803 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:55.552591    9803 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:55.552625    9803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:44:55.552647    9803 main.go:141] libmachine: Decoding PEM data...
	I0719 07:44:55.552652    9803 main.go:141] libmachine: Parsing certificate...
	I0719 07:44:55.553047    9803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:44:55.681866    9803 main.go:141] libmachine: Creating SSH key...
	I0719 07:44:55.783754    9803 main.go:141] libmachine: Creating Disk image...
	I0719 07:44:55.783763    9803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:44:55.783970    9803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:44:55.793670    9803 main.go:141] libmachine: STDOUT: 
	I0719 07:44:55.793692    9803 main.go:141] libmachine: STDERR: 
	I0719 07:44:55.793752    9803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2 +20000M
	I0719 07:44:55.801827    9803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:44:55.801840    9803 main.go:141] libmachine: STDERR: 
	I0719 07:44:55.801859    9803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:44:55.801864    9803 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:44:55.801879    9803 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:44:55.801912    9803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:ad:74:29:10:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:44:55.803535    9803 main.go:141] libmachine: STDOUT: 
	I0719 07:44:55.803549    9803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:44:55.803567    9803 client.go:171] duration metric: took 251.094833ms to LocalClient.Create
	I0719 07:44:57.805767    9803 start.go:128] duration metric: took 2.272766958s to createHost
	I0719 07:44:57.805847    9803 start.go:83] releasing machines lock for "old-k8s-version-572000", held for 2.272904959s
	W0719 07:44:57.805986    9803 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:57.816289    9803 out.go:177] * Deleting "old-k8s-version-572000" in qemu2 ...
	W0719 07:44:57.837485    9803 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:44:57.837530    9803 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:02.839640    9803 start.go:360] acquireMachinesLock for old-k8s-version-572000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:02.840257    9803 start.go:364] duration metric: took 496.25µs to acquireMachinesLock for "old-k8s-version-572000"
	I0719 07:45:02.840321    9803 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:02.840667    9803 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:02.851503    9803 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:02.893186    9803 start.go:159] libmachine.API.Create for "old-k8s-version-572000" (driver="qemu2")
	I0719 07:45:02.893238    9803 client.go:168] LocalClient.Create starting
	I0719 07:45:02.893372    9803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:02.893437    9803 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:02.893458    9803 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:02.893535    9803 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:02.893582    9803 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:02.893596    9803 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:02.894110    9803 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:03.029915    9803 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:03.117999    9803 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:03.118006    9803 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:03.118203    9803 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:45:03.127502    9803 main.go:141] libmachine: STDOUT: 
	I0719 07:45:03.127522    9803 main.go:141] libmachine: STDERR: 
	I0719 07:45:03.127571    9803 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2 +20000M
	I0719 07:45:03.135597    9803 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:03.135615    9803 main.go:141] libmachine: STDERR: 
	I0719 07:45:03.135626    9803 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:45:03.135632    9803 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:03.135646    9803 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:03.135678    9803 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:c5:19:cd:b9:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:45:03.137388    9803 main.go:141] libmachine: STDOUT: 
	I0719 07:45:03.137402    9803 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:03.137419    9803 client.go:171] duration metric: took 244.176791ms to LocalClient.Create
	I0719 07:45:05.139499    9803 start.go:128] duration metric: took 2.2988345s to createHost
	I0719 07:45:05.139527    9803 start.go:83] releasing machines lock for "old-k8s-version-572000", held for 2.299269417s
	W0719 07:45:05.139702    9803 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-572000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:05.146595    9803 out.go:177] 
	W0719 07:45:05.150589    9803 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:05.150598    9803 out.go:239] * 
	* 
	W0719 07:45:05.151413    9803 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:05.161545    9803 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-572000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (41.056ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.79s)
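
Note: every qemu2 start in this report fails at the same step: the socket_vmnet client cannot reach the daemon socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so no VM is ever created. A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew (the service name is an assumption, not taken from this report):

    ls -l /var/run/socket_vmnet             # the UNIX socket the client dials; missing or stale if the daemon is down
    pgrep -fl socket_vmnet                  # is a socket_vmnet daemon process running at all?
    sudo brew services restart socket_vmnet # assumed Homebrew service name for restarting the daemon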

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-572000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-572000 create -f testdata/busybox.yaml: exit status 1 (27.567709ms)

** stderr ** 
	error: context "old-k8s-version-572000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-572000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (29.71925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (29.72125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
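
Note: this failure and the remaining old-k8s-version failures below are downstream of FirstStart: the cluster never came up, so the "old-k8s-version-572000" kubeconfig context was never written. A quick way to confirm the missing context (standard kubectl, not part of the test run):

    kubectl config get-contexts old-k8s-version-572000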

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-572000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-572000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-572000 describe deploy/metrics-server -n kube-system: exit status 1 (27.429167ms)

** stderr ** 
	error: context "old-k8s-version-572000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-572000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (29.731125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-572000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-572000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.19046375s)

-- stdout --
	* [old-k8s-version-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-572000" primary control-plane node in "old-k8s-version-572000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-572000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-572000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:45:08.856871    9855 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:08.856985    9855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:08.856989    9855 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:08.856992    9855 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:08.857122    9855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:08.858242    9855 out.go:298] Setting JSON to false
	I0719 07:45:08.874699    9855 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6277,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:08.874769    9855 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:08.879921    9855 out.go:177] * [old-k8s-version-572000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:08.886974    9855 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:08.887034    9855 notify.go:220] Checking for updates...
	I0719 07:45:08.894909    9855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:08.897944    9855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:08.900839    9855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:08.903904    9855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:08.906909    9855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:08.910074    9855 config.go:182] Loaded profile config "old-k8s-version-572000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 07:45:08.912831    9855 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 07:45:08.915942    9855 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:08.919839    9855 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:45:08.926923    9855 start.go:297] selected driver: qemu2
	I0719 07:45:08.926930    9855 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:08.926993    9855 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:08.929309    9855 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:45:08.929335    9855 cni.go:84] Creating CNI manager for ""
	I0719 07:45:08.929343    9855 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 07:45:08.929364    9855 start.go:340] cluster config:
	{Name:old-k8s-version-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:08.932815    9855 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:08.940845    9855 out.go:177] * Starting "old-k8s-version-572000" primary control-plane node in "old-k8s-version-572000" cluster
	I0719 07:45:08.943858    9855 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:45:08.943874    9855 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 07:45:08.943886    9855 cache.go:56] Caching tarball of preloaded images
	I0719 07:45:08.943958    9855 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:45:08.943964    9855 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 07:45:08.944021    9855 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/old-k8s-version-572000/config.json ...
	I0719 07:45:08.944396    9855 start.go:360] acquireMachinesLock for old-k8s-version-572000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:08.944425    9855 start.go:364] duration metric: took 23.542µs to acquireMachinesLock for "old-k8s-version-572000"
	I0719 07:45:08.944433    9855 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:08.944440    9855 fix.go:54] fixHost starting: 
	I0719 07:45:08.944547    9855 fix.go:112] recreateIfNeeded on old-k8s-version-572000: state=Stopped err=<nil>
	W0719 07:45:08.944555    9855 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:08.948911    9855 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-572000" ...
	I0719 07:45:08.955868    9855 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:08.955909    9855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:c5:19:cd:b9:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:45:08.957838    9855 main.go:141] libmachine: STDOUT: 
	I0719 07:45:08.957853    9855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:08.957878    9855 fix.go:56] duration metric: took 13.438667ms for fixHost
	I0719 07:45:08.957883    9855 start.go:83] releasing machines lock for "old-k8s-version-572000", held for 13.453708ms
	W0719 07:45:08.957888    9855 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:08.957919    9855 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:08.957923    9855 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:13.960053    9855 start.go:360] acquireMachinesLock for old-k8s-version-572000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:13.960504    9855 start.go:364] duration metric: took 334.792µs to acquireMachinesLock for "old-k8s-version-572000"
	I0719 07:45:13.960572    9855 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:13.960595    9855 fix.go:54] fixHost starting: 
	I0719 07:45:13.961322    9855 fix.go:112] recreateIfNeeded on old-k8s-version-572000: state=Stopped err=<nil>
	W0719 07:45:13.961349    9855 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:13.969961    9855 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-572000" ...
	I0719 07:45:13.974085    9855 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:13.974485    9855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:c5:19:cd:b9:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/old-k8s-version-572000/disk.qcow2
	I0719 07:45:13.984521    9855 main.go:141] libmachine: STDOUT: 
	I0719 07:45:13.984584    9855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:13.984652    9855 fix.go:56] duration metric: took 24.056583ms for fixHost
	I0719 07:45:13.984674    9855 start.go:83] releasing machines lock for "old-k8s-version-572000", held for 24.14725ms
	W0719 07:45:13.984832    9855 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-572000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-572000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:13.993016    9855 out.go:177] 
	W0719 07:45:13.997145    9855 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:13.997185    9855 out.go:239] * 
	* 
	W0719 07:45:13.999936    9855 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:14.007038    9855 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-572000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (59.871541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
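
Note: SecondStart takes the restart path (fixHost / "Restarting existing qemu2 VM") but dies on the same socket connect as FirstStart. The failing step can be reproduced outside the harness using the client wrapper and socket path shown in the log; the CLI form (socket path, then the command to wrap) is taken from the invocations above, and `true` is an illustrative stand-in for the qemu command:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true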

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-572000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (31.072916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-572000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-572000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-572000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.602583ms)

** stderr ** 
	error: context "old-k8s-version-572000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-572000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (29.307792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-572000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (29.64125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
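
Note: in the -want +got diff above, the "-" lines are the pinned v1.20.0 images the test expects; all of them show as missing because `image list` ran against a host that never booted, so it returned nothing. The check can be re-run standalone with the same binary and profile from the log:

    out/minikube-darwin-arm64 -p old-k8s-version-572000 image list --format=json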

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-572000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-572000 --alsologtostderr -v=1: exit status 83 (42.815667ms)

-- stdout --
	* The control-plane node old-k8s-version-572000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-572000"

-- /stdout --
** stderr ** 
	I0719 07:45:14.267207    9876 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:14.268072    9876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:14.268076    9876 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:14.268078    9876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:14.268221    9876 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:14.268445    9876 out.go:298] Setting JSON to false
	I0719 07:45:14.268452    9876 mustload.go:65] Loading cluster: old-k8s-version-572000
	I0719 07:45:14.268664    9876 config.go:182] Loaded profile config "old-k8s-version-572000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0719 07:45:14.273623    9876 out.go:177] * The control-plane node old-k8s-version-572000 host is not running: state=Stopped
	I0719 07:45:14.277365    9876 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-572000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-572000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (28.510333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (29.320542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-572000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-626000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-626000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.913970959s)

-- stdout --
	* [no-preload-626000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-626000" primary control-plane node in "no-preload-626000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-626000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:45:14.576225    9893 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:14.576602    9893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:14.576608    9893 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:14.576611    9893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:14.576806    9893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:14.578193    9893 out.go:298] Setting JSON to false
	I0719 07:45:14.594816    9893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6283,"bootTime":1721394031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:14.594897    9893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:14.599495    9893 out.go:177] * [no-preload-626000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:14.606443    9893 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:14.606487    9893 notify.go:220] Checking for updates...
	I0719 07:45:14.613479    9893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:14.616474    9893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:14.619448    9893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:14.622470    9893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:14.625376    9893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:14.628766    9893 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:14.628832    9893 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:45:14.628900    9893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:14.632483    9893 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:45:14.639469    9893 start.go:297] selected driver: qemu2
	I0719 07:45:14.639475    9893 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:45:14.639480    9893 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:14.641686    9893 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:45:14.644442    9893 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:45:14.647506    9893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:45:14.647545    9893 cni.go:84] Creating CNI manager for ""
	I0719 07:45:14.647555    9893 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:45:14.647559    9893 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:45:14.647605    9893 start.go:340] cluster config:
	{Name:no-preload-626000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:14.651100    9893 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.658493    9893 out.go:177] * Starting "no-preload-626000" primary control-plane node in "no-preload-626000" cluster
	I0719 07:45:14.662495    9893 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:45:14.662581    9893 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/no-preload-626000/config.json ...
	I0719 07:45:14.662600    9893 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/no-preload-626000/config.json: {Name:mk7488610fe638f6f814156d84438b6082f971ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:45:14.662624    9893 cache.go:107] acquiring lock: {Name:mk92593876cf6800835c6d9e9859b03602ce730b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662646    9893 cache.go:107] acquiring lock: {Name:mkf3003f7035974ffc427649568ed473c2759a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662689    9893 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 07:45:14.662698    9893 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 74.083µs
	I0719 07:45:14.662704    9893 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 07:45:14.662713    9893 cache.go:107] acquiring lock: {Name:mka74ce9d9c0b8cb47cc25ca1939934fe2e90fb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662805    9893 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 07:45:14.662783    9893 cache.go:107] acquiring lock: {Name:mk3c68a58dcde4ce11ce2770ef1b7c4668edf4b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662809    9893 cache.go:107] acquiring lock: {Name:mkfec80b3d5715f072ea815c90d9101666600225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662875    9893 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 07:45:14.662902    9893 cache.go:107] acquiring lock: {Name:mk933008f3332c490cceebe5b5a2004baca52e35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662911    9893 start.go:360] acquireMachinesLock for no-preload-626000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:14.662902    9893 cache.go:107] acquiring lock: {Name:mkd61654fcf7fad48a1df10c9a28265b5f2c084b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662976    9893 start.go:364] duration metric: took 58.833µs to acquireMachinesLock for "no-preload-626000"
	I0719 07:45:14.662926    9893 cache.go:107] acquiring lock: {Name:mkc34eb846364f10b2fe23786a00ca6a779b8fd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:14.662989    9893 start.go:93] Provisioning new machine with config: &{Name:no-preload-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:14.663025    9893 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:14.663041    9893 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 07:45:14.663048    9893 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 07:45:14.663152    9893 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 07:45:14.663170    9893 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 07:45:14.663178    9893 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 07:45:14.666423    9893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:14.674384    9893 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 07:45:14.674401    9893 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 07:45:14.675920    9893 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 07:45:14.676035    9893 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 07:45:14.676146    9893 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 07:45:14.676165    9893 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 07:45:14.676189    9893 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 07:45:14.682847    9893 start.go:159] libmachine.API.Create for "no-preload-626000" (driver="qemu2")
	I0719 07:45:14.682878    9893 client.go:168] LocalClient.Create starting
	I0719 07:45:14.682949    9893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:14.682984    9893 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:14.682994    9893 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:14.683039    9893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:14.683061    9893 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:14.683069    9893 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:14.683456    9893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:14.815391    9893 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:15.056340    9893 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:15.056357    9893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:15.056566    9893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:15.066185    9893 main.go:141] libmachine: STDOUT: 
	I0719 07:45:15.066201    9893 main.go:141] libmachine: STDERR: 
	I0719 07:45:15.066251    9893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2 +20000M
	I0719 07:45:15.074344    9893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:15.074357    9893 main.go:141] libmachine: STDERR: 
	I0719 07:45:15.074372    9893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:15.074377    9893 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:15.074389    9893 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:15.074412    9893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f6:27:35:ba:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:15.075276    9893 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 07:45:15.076378    9893 main.go:141] libmachine: STDOUT: 
	I0719 07:45:15.076404    9893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:15.076414    9893 client.go:171] duration metric: took 393.53675ms to LocalClient.Create
	I0719 07:45:15.077119    9893 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0719 07:45:15.079582    9893 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0719 07:45:15.103215    9893 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 07:45:15.117612    9893 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 07:45:15.131287    9893 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 07:45:15.169340    9893 cache.go:162] opening:  /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 07:45:15.218082    9893 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0719 07:45:15.218094    9893 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 555.35875ms
	I0719 07:45:15.218105    9893 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0719 07:45:17.076517    9893 start.go:128] duration metric: took 2.413506708s to createHost
	I0719 07:45:17.076545    9893 start.go:83] releasing machines lock for "no-preload-626000", held for 2.413586959s
	W0719 07:45:17.076561    9893 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:17.085414    9893 out.go:177] * Deleting "no-preload-626000" in qemu2 ...
	W0719 07:45:17.092740    9893 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:17.092747    9893 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:17.714318    9893 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0719 07:45:17.714357    9893 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.051617041s
	I0719 07:45:17.714369    9893 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0719 07:45:18.249700    9893 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0719 07:45:18.249726    9893 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.586905375s
	I0719 07:45:18.249739    9893 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0719 07:45:18.514752    9893 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0719 07:45:18.514799    9893 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 3.852008125s
	I0719 07:45:18.514821    9893 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0719 07:45:19.066924    9893 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0719 07:45:19.066964    9893 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.404157875s
	I0719 07:45:19.066975    9893 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0719 07:45:19.146999    9893 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0719 07:45:19.147018    9893 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.484434875s
	I0719 07:45:19.147029    9893 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0719 07:45:22.092846    9893 start.go:360] acquireMachinesLock for no-preload-626000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:22.093222    9893 start.go:364] duration metric: took 300.166µs to acquireMachinesLock for "no-preload-626000"
	I0719 07:45:22.093332    9893 start.go:93] Provisioning new machine with config: &{Name:no-preload-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:22.093546    9893 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:22.099101    9893 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:22.141426    9893 start.go:159] libmachine.API.Create for "no-preload-626000" (driver="qemu2")
	I0719 07:45:22.141478    9893 client.go:168] LocalClient.Create starting
	I0719 07:45:22.141586    9893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:22.141652    9893 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:22.141674    9893 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:22.141763    9893 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:22.141802    9893 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:22.141817    9893 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:22.142325    9893 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:22.278285    9893 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:22.396633    9893 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:22.396640    9893 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:22.396844    9893 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:22.406912    9893 main.go:141] libmachine: STDOUT: 
	I0719 07:45:22.406955    9893 main.go:141] libmachine: STDERR: 
	I0719 07:45:22.407015    9893 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2 +20000M
	I0719 07:45:22.415512    9893 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:22.415540    9893 main.go:141] libmachine: STDERR: 
	I0719 07:45:22.415557    9893 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:22.415563    9893 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:22.415570    9893 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:22.415609    9893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:6d:25:99:5d:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:22.417561    9893 main.go:141] libmachine: STDOUT: 
	I0719 07:45:22.417585    9893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:22.417601    9893 client.go:171] duration metric: took 276.120625ms to LocalClient.Create
	I0719 07:45:22.479228    9893 cache.go:157] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0719 07:45:22.479240    9893 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 7.816599792s
	I0719 07:45:22.479247    9893 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0719 07:45:22.479269    9893 cache.go:87] Successfully saved all images to host disk.
	I0719 07:45:24.419677    9893 start.go:128] duration metric: took 2.326132958s to createHost
	I0719 07:45:24.419705    9893 start.go:83] releasing machines lock for "no-preload-626000", held for 2.326493166s
	W0719 07:45:24.419778    9893 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-626000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:24.428518    9893 out.go:177] 
	W0719 07:45:24.432601    9893 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:24.432607    9893 out.go:239] * 
	* 
	W0719 07:45:24.433073    9893 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:24.446307    9893 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-626000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (37.169792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.95s)
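
In every attempt above the sequence is the same: libmachine builds the disk with qemu-img, then launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). The VM is therefore never created, and everything later in this serial group fails as a consequence. A host-side sanity check before re-running, sketched under the assumption of a Homebrew-managed socket_vmnet install (the service name and restart command are illustrative, not taken from this log):

	# Is the socket_vmnet daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, restart it; the daemon must run as root to use vmnet.
	sudo brew services restart socket_vmnet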

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-626000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-626000 create -f testdata/busybox.yaml: exit status 1 (29.507667ms)

** stderr ** 
	error: context "no-preload-626000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-626000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (30.638959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (29.219167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
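
The failure here is purely downstream of FirstStart: the cluster was never provisioned, so minikube never wrote a "no-preload-626000" context into the kubeconfig, and kubectl exits before it even reads testdata/busybox.yaml. Confirming the missing context is a one-liner (a sketch; the kubeconfig path is the one the logs above report):

	# A healthy run would list a context named after the profile.
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19302-5980/kubeconfig config get-contexts

	# Scripted form: prints a note when the context is absent.
	kubectl config get-contexts -o name | grep -qx no-preload-626000 || echo "context missing"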

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-626000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-626000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-626000 describe deploy/metrics-server -n kube-system: exit status 1 (28.567709ms)

** stderr ** 
	error: context "no-preload-626000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-626000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (36.867375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)
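
Note the asymmetry in this test: the `addons enable` step passes because it apparently only records the addon and the --images/--registries overrides in the profile config, while the follow-up `kubectl describe` needs a live cluster and fails. Against a running cluster, the assertion reduces to checking that the deployment picked up the overridden registry, roughly (a sketch of the intent, not the test's exact code):

	# The metrics-server image should carry the registry override passed
	# via --registries=MetricsServer=fake.domain.
	kubectl --context no-preload-626000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4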

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-626000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-626000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.179765708s)

-- stdout --
	* [no-preload-626000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-626000" primary control-plane node in "no-preload-626000" cluster
	* Restarting existing qemu2 VM for "no-preload-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-626000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:45:28.284041    9978 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:28.284195    9978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:28.284199    9978 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:28.284201    9978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:28.284337    9978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:28.285338    9978 out.go:298] Setting JSON to false
	I0719 07:45:28.301521    9978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6297,"bootTime":1721394031,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:28.301591    9978 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:28.306308    9978 out.go:177] * [no-preload-626000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:28.313215    9978 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:28.313237    9978 notify.go:220] Checking for updates...
	I0719 07:45:28.320196    9978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:28.323257    9978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:28.326314    9978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:28.329313    9978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:28.332239    9978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:28.335533    9978 config.go:182] Loaded profile config "no-preload-626000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 07:45:28.335794    9978 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:28.340311    9978 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:45:28.347239    9978 start.go:297] selected driver: qemu2
	I0719 07:45:28.347244    9978 start.go:901] validating driver "qemu2" against &{Name:no-preload-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:28.347295    9978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:28.349769    9978 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:45:28.349794    9978 cni.go:84] Creating CNI manager for ""
	I0719 07:45:28.349806    9978 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:45:28.349831    9978 start.go:340] cluster config:
	{Name:no-preload-626000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-626000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:28.353390    9978 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.359271    9978 out.go:177] * Starting "no-preload-626000" primary control-plane node in "no-preload-626000" cluster
	I0719 07:45:28.363269    9978 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:45:28.363324    9978 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/no-preload-626000/config.json ...
	I0719 07:45:28.363347    9978 cache.go:107] acquiring lock: {Name:mk92593876cf6800835c6d9e9859b03602ce730b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363402    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 07:45:28.363407    9978 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 62.375µs
	I0719 07:45:28.363427    9978 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 07:45:28.363437    9978 cache.go:107] acquiring lock: {Name:mkfec80b3d5715f072ea815c90d9101666600225 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363473    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0719 07:45:28.363476    9978 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 40µs
	I0719 07:45:28.363479    9978 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0719 07:45:28.363453    9978 cache.go:107] acquiring lock: {Name:mk933008f3332c490cceebe5b5a2004baca52e35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363486    9978 cache.go:107] acquiring lock: {Name:mkd61654fcf7fad48a1df10c9a28265b5f2c084b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363485    9978 cache.go:107] acquiring lock: {Name:mka74ce9d9c0b8cb47cc25ca1939934fe2e90fb0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363524    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0719 07:45:28.363528    9978 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 43.875µs
	I0719 07:45:28.363532    9978 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0719 07:45:28.363533    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0719 07:45:28.363540    9978 cache.go:107] acquiring lock: {Name:mkc34eb846364f10b2fe23786a00ca6a779b8fd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363542    9978 cache.go:107] acquiring lock: {Name:mk3c68a58dcde4ce11ce2770ef1b7c4668edf4b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363534    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0719 07:45:28.363576    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0719 07:45:28.363581    9978 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 41.458µs
	I0719 07:45:28.363584    9978 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0719 07:45:28.363589    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0719 07:45:28.363541    9978 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 106.5µs
	I0719 07:45:28.363593    9978 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 51.625µs
	I0719 07:45:28.363597    9978 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0719 07:45:28.363599    9978 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0719 07:45:28.363594    9978 cache.go:107] acquiring lock: {Name:mkf3003f7035974ffc427649568ed473c2759a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:28.363594    9978 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 95.042µs
	I0719 07:45:28.363635    9978 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0719 07:45:28.363671    9978 cache.go:115] /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0719 07:45:28.363677    9978 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 137.709µs
	I0719 07:45:28.363681    9978 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0719 07:45:28.363686    9978 cache.go:87] Successfully saved all images to host disk.
	I0719 07:45:28.363695    9978 start.go:360] acquireMachinesLock for no-preload-626000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:28.363723    9978 start.go:364] duration metric: took 22.584µs to acquireMachinesLock for "no-preload-626000"
	I0719 07:45:28.363730    9978 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:28.363736    9978 fix.go:54] fixHost starting: 
	I0719 07:45:28.363842    9978 fix.go:112] recreateIfNeeded on no-preload-626000: state=Stopped err=<nil>
	W0719 07:45:28.363850    9978 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:28.372310    9978 out.go:177] * Restarting existing qemu2 VM for "no-preload-626000" ...
	I0719 07:45:28.375303    9978 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:28.375342    9978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:6d:25:99:5d:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:28.377207    9978 main.go:141] libmachine: STDOUT: 
	I0719 07:45:28.377222    9978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:28.377249    9978 fix.go:56] duration metric: took 13.51325ms for fixHost
	I0719 07:45:28.377256    9978 start.go:83] releasing machines lock for "no-preload-626000", held for 13.529541ms
	W0719 07:45:28.377262    9978 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:28.377288    9978 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:28.377292    9978 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:33.379534    9978 start.go:360] acquireMachinesLock for no-preload-626000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:33.380029    9978 start.go:364] duration metric: took 398.542µs to acquireMachinesLock for "no-preload-626000"
	I0719 07:45:33.380195    9978 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:33.380216    9978 fix.go:54] fixHost starting: 
	I0719 07:45:33.380928    9978 fix.go:112] recreateIfNeeded on no-preload-626000: state=Stopped err=<nil>
	W0719 07:45:33.380954    9978 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:33.385223    9978 out.go:177] * Restarting existing qemu2 VM for "no-preload-626000" ...
	I0719 07:45:33.393312    9978 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:33.393550    9978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:6d:25:99:5d:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/no-preload-626000/disk.qcow2
	I0719 07:45:33.403143    9978 main.go:141] libmachine: STDOUT: 
	I0719 07:45:33.403207    9978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:33.403298    9978 fix.go:56] duration metric: took 23.0825ms for fixHost
	I0719 07:45:33.403317    9978 start.go:83] releasing machines lock for "no-preload-626000", held for 23.265166ms
	W0719 07:45:33.403518    9978 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-626000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-626000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:33.411289    9978 out.go:177] 
	W0719 07:45:33.414417    9978 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:33.414455    9978 out.go:239] * 
	* 
	W0719 07:45:33.416933    9978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:33.423370    9978 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-626000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (67.406083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
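Note: every failed start in this group dies at the same step. The qemu2 driver launches the guest through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket /var/run/socket_vmnet; the "-netdev socket,id=net0,fd=3" argument refers to the descriptor that client hands QEMU once connected. The connection is refused, so no socket_vmnet daemon is listening on the build agent. A minimal pre-flight check, sketched under the assumption of a launchd-managed socket_vmnet install (the service label varies by install method):

	# Is anything serving the unix socket the driver needs?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded in the system launchd domain? (label is install-specific)
	sudo launchctl list | grep -i socket_vmnet

Restarting the socket_vmnet service on the agent is the first remediation to try; the failures below all repeat this root cause.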

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-626000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (32.551541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
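Note: the context "no-preload-626000" does not exist errors in this and the following subtests are downstream of the failed start, since minikube only writes a profile's context into the kubeconfig once its cluster comes up. Plain kubectl (no minikube-specific flags assumed) confirms which contexts actually exist:

	kubectl config get-contexts      # list every context in the active kubeconfig
	kubectl config current-context   # show the one kubectl would use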

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-626000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-626000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-626000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.008625ms)

** stderr ** 
	error: context "no-preload-626000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-626000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (29.42975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-626000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (28.771916ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
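Note: the (-want +got) diff lists every expected v1.31.0-beta.0 image as missing, which is simply what "image list" yields against a host that never booted; it says nothing about the images themselves. For reference, the same check can be run by hand once a profile is up (assuming the usual short|table|json|yaml values for --format):

	out/minikube-darwin-arm64 -p no-preload-626000 image list --format=table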

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-626000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-626000 --alsologtostderr -v=1: exit status 83 (42.669417ms)

-- stdout --
	* The control-plane node no-preload-626000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-626000"

-- /stdout --
** stderr ** 
	I0719 07:45:33.693621    9997 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:33.693766    9997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:33.693770    9997 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:33.693772    9997 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:33.693908    9997 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:33.694110    9997 out.go:298] Setting JSON to false
	I0719 07:45:33.694118    9997 mustload.go:65] Loading cluster: no-preload-626000
	I0719 07:45:33.694308    9997 config.go:182] Loaded profile config "no-preload-626000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 07:45:33.698829    9997 out.go:177] * The control-plane node no-preload-626000 host is not running: state=Stopped
	I0719 07:45:33.702964    9997 out.go:177]   To start a cluster, run: "minikube start -p no-preload-626000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-626000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (29.586709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (29.651ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-626000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
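Note: exit status 83 here is the advisory path rather than a pause failure as such; the command sees state=Stopped, prints the "To start a cluster" hint, and exits without attempting the operation. As a sketch, the sequence this subtest is meant to exercise once a start succeeds:

	out/minikube-darwin-arm64 start -p no-preload-626000     # must come up first
	out/minikube-darwin-arm64 pause -p no-preload-626000
	out/minikube-darwin-arm64 unpause -p no-preload-626000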

TestStartStop/group/embed-certs/serial/FirstStart (11.57s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-120000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-120000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.497074084s)

-- stdout --
	* [embed-certs-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-120000" primary control-plane node in "embed-certs-120000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-120000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:45:34.008792   10014 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:34.008931   10014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:34.008934   10014 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:34.008942   10014 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:34.009063   10014 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:34.010119   10014 out.go:298] Setting JSON to false
	I0719 07:45:34.026782   10014 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6303,"bootTime":1721394031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:34.026847   10014 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:34.031895   10014 out.go:177] * [embed-certs-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:34.039052   10014 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:34.039159   10014 notify.go:220] Checking for updates...
	I0719 07:45:34.046055   10014 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:34.049081   10014 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:34.052089   10014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:34.055069   10014 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:34.058078   10014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:34.059683   10014 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:34.059740   10014 config.go:182] Loaded profile config "stopped-upgrade-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0719 07:45:34.059788   10014 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:34.063995   10014 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:45:34.070855   10014 start.go:297] selected driver: qemu2
	I0719 07:45:34.070860   10014 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:45:34.070865   10014 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:34.073041   10014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:45:34.076089   10014 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:45:34.079131   10014 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:45:34.079159   10014 cni.go:84] Creating CNI manager for ""
	I0719 07:45:34.079165   10014 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:45:34.079170   10014 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:45:34.079200   10014 start.go:340] cluster config:
	{Name:embed-certs-120000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:34.082865   10014 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:34.090065   10014 out.go:177] * Starting "embed-certs-120000" primary control-plane node in "embed-certs-120000" cluster
	I0719 07:45:34.094068   10014 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:45:34.094082   10014 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:45:34.094093   10014 cache.go:56] Caching tarball of preloaded images
	I0719 07:45:34.094147   10014 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:45:34.094151   10014 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:45:34.094202   10014 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/embed-certs-120000/config.json ...
	I0719 07:45:34.094213   10014 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/embed-certs-120000/config.json: {Name:mkbd9f21ac154f7ccc33076a0b7d951184e06ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:45:34.094512   10014 start.go:360] acquireMachinesLock for embed-certs-120000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:34.094545   10014 start.go:364] duration metric: took 25.458µs to acquireMachinesLock for "embed-certs-120000"
	I0719 07:45:34.094555   10014 start.go:93] Provisioning new machine with config: &{Name:embed-certs-120000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:34.094582   10014 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:34.103070   10014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:34.119801   10014 start.go:159] libmachine.API.Create for "embed-certs-120000" (driver="qemu2")
	I0719 07:45:34.119826   10014 client.go:168] LocalClient.Create starting
	I0719 07:45:34.119891   10014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:34.119927   10014 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:34.119939   10014 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:34.119977   10014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:34.120002   10014 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:34.120011   10014 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:34.120489   10014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:34.250812   10014 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:34.615942   10014 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:34.615955   10014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:34.616201   10014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:34.625780   10014 main.go:141] libmachine: STDOUT: 
	I0719 07:45:34.625800   10014 main.go:141] libmachine: STDERR: 
	I0719 07:45:34.625865   10014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2 +20000M
	I0719 07:45:34.633882   10014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:34.633902   10014 main.go:141] libmachine: STDERR: 
	I0719 07:45:34.633918   10014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:34.633923   10014 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:34.633941   10014 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:34.633968   10014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:64:69:e3:79:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:34.635663   10014 main.go:141] libmachine: STDOUT: 
	I0719 07:45:34.635694   10014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:34.635714   10014 client.go:171] duration metric: took 515.88975ms to LocalClient.Create
	I0719 07:45:36.637941   10014 start.go:128] duration metric: took 2.543352s to createHost
	I0719 07:45:36.638009   10014 start.go:83] releasing machines lock for "embed-certs-120000", held for 2.543478333s
	W0719 07:45:36.638081   10014 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:36.648988   10014 out.go:177] * Deleting "embed-certs-120000" in qemu2 ...
	W0719 07:45:36.672503   10014 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:36.672537   10014 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:41.674657   10014 start.go:360] acquireMachinesLock for embed-certs-120000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:43.130572   10014 start.go:364] duration metric: took 1.455842125s to acquireMachinesLock for "embed-certs-120000"
	I0719 07:45:43.130736   10014 start.go:93] Provisioning new machine with config: &{Name:embed-certs-120000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:43.130992   10014 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:43.136577   10014 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:43.186178   10014 start.go:159] libmachine.API.Create for "embed-certs-120000" (driver="qemu2")
	I0719 07:45:43.186229   10014 client.go:168] LocalClient.Create starting
	I0719 07:45:43.186347   10014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:43.186416   10014 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:43.186434   10014 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:43.186491   10014 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:43.186534   10014 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:43.186551   10014 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:43.187120   10014 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:43.327475   10014 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:43.416209   10014 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:43.416218   10014 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:43.416436   10014 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:43.425815   10014 main.go:141] libmachine: STDOUT: 
	I0719 07:45:43.425845   10014 main.go:141] libmachine: STDERR: 
	I0719 07:45:43.425909   10014 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2 +20000M
	I0719 07:45:43.433816   10014 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:43.433830   10014 main.go:141] libmachine: STDERR: 
	I0719 07:45:43.433842   10014 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:43.433848   10014 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:43.433863   10014 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:43.433897   10014 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:70:15:11:40:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:43.435549   10014 main.go:141] libmachine: STDOUT: 
	I0719 07:45:43.435561   10014 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:43.435577   10014 client.go:171] duration metric: took 249.342875ms to LocalClient.Create
	I0719 07:45:45.436807   10014 start.go:128] duration metric: took 2.305749667s to createHost
	I0719 07:45:45.436938   10014 start.go:83] releasing machines lock for "embed-certs-120000", held for 2.306327125s
	W0719 07:45:45.437396   10014 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-120000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:45.444118   10014 out.go:177] 
	W0719 07:45:45.452238   10014 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:45.452297   10014 out.go:239] * 
	* 
	W0719 07:45:45.454943   10014 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:45.464186   10014 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-120000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (65.669584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (11.57s)
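Note: both creation attempts complete the local disk work cleanly (qemu-img convert and qemu-img resize return empty STDERR), so the failure is confined to the networked launch step. To rule out the disk tooling independently, the two qemu-img steps can be reproduced standalone; paths are shortened here, substitute the machine directory from the log:

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # wrap the raw boot disk as qcow2
	qemu-img resize disk.qcow2 +20000M                           # grow it by the requested 20000 MB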

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-109000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-109000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.757216s)

-- stdout --
	* [default-k8s-diff-port-109000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-109000" primary control-plane node in "default-k8s-diff-port-109000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-109000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:45:40.806796   10037 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:40.806916   10037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:40.806920   10037 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:40.806924   10037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:40.807041   10037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:40.808108   10037 out.go:298] Setting JSON to false
	I0719 07:45:40.824259   10037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6309,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:40.824341   10037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:40.826447   10037 out.go:177] * [default-k8s-diff-port-109000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:40.834152   10037 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:40.834205   10037 notify.go:220] Checking for updates...
	I0719 07:45:40.841108   10037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:40.844102   10037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:40.846979   10037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:40.850080   10037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:40.853134   10037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:40.854862   10037 config.go:182] Loaded profile config "embed-certs-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:40.854922   10037 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:40.854970   10037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:40.859084   10037 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:45:40.865962   10037 start.go:297] selected driver: qemu2
	I0719 07:45:40.865970   10037 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:45:40.865977   10037 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:40.868228   10037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:45:40.871028   10037 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:45:40.874164   10037 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:45:40.874203   10037 cni.go:84] Creating CNI manager for ""
	I0719 07:45:40.874210   10037 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:45:40.874213   10037 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:45:40.874238   10037 start.go:340] cluster config:
	{Name:default-k8s-diff-port-109000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:40.877840   10037 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:40.885088   10037 out.go:177] * Starting "default-k8s-diff-port-109000" primary control-plane node in "default-k8s-diff-port-109000" cluster
	I0719 07:45:40.889073   10037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:45:40.889086   10037 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:45:40.889095   10037 cache.go:56] Caching tarball of preloaded images
	I0719 07:45:40.889154   10037 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:45:40.889159   10037 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:45:40.889212   10037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/default-k8s-diff-port-109000/config.json ...
	I0719 07:45:40.889224   10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/default-k8s-diff-port-109000/config.json: {Name:mk7e5361a621f573f24055ba4cb2780e523823fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:45:40.889535   10037 start.go:360] acquireMachinesLock for default-k8s-diff-port-109000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:40.889575   10037 start.go:364] duration metric: took 32.792µs to acquireMachinesLock for "default-k8s-diff-port-109000"
	I0719 07:45:40.889586   10037 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:40.889613   10037 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:40.898127   10037 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:40.915331   10037 start.go:159] libmachine.API.Create for "default-k8s-diff-port-109000" (driver="qemu2")
	I0719 07:45:40.915363   10037 client.go:168] LocalClient.Create starting
	I0719 07:45:40.915430   10037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:40.915461   10037 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:40.915475   10037 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:40.915508   10037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:40.915533   10037 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:40.915539   10037 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:40.915942   10037 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:41.030731   10037 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:41.108704   10037 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:41.108709   10037 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:41.108887   10037 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:41.118198   10037 main.go:141] libmachine: STDOUT: 
	I0719 07:45:41.118214   10037 main.go:141] libmachine: STDERR: 
	I0719 07:45:41.118261   10037 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2 +20000M
	I0719 07:45:41.126230   10037 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:41.126251   10037 main.go:141] libmachine: STDERR: 
	I0719 07:45:41.126270   10037 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:41.126276   10037 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:41.126286   10037 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:41.126312   10037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:d9:fc:9e:21:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:41.128008   10037 main.go:141] libmachine: STDOUT: 
	I0719 07:45:41.128022   10037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:41.128039   10037 client.go:171] duration metric: took 212.675167ms to LocalClient.Create
	I0719 07:45:43.130326   10037 start.go:128] duration metric: took 2.240697375s to createHost
	I0719 07:45:43.130417   10037 start.go:83] releasing machines lock for "default-k8s-diff-port-109000", held for 2.240853584s
	W0719 07:45:43.130472   10037 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:43.146535   10037 out.go:177] * Deleting "default-k8s-diff-port-109000" in qemu2 ...
	W0719 07:45:43.162315   10037 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:43.162346   10037 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:48.164504   10037 start.go:360] acquireMachinesLock for default-k8s-diff-port-109000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:48.165014   10037 start.go:364] duration metric: took 349.875µs to acquireMachinesLock for "default-k8s-diff-port-109000"
	I0719 07:45:48.165087   10037 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:48.165361   10037 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:48.171071   10037 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:48.222842   10037 start.go:159] libmachine.API.Create for "default-k8s-diff-port-109000" (driver="qemu2")
	I0719 07:45:48.222897   10037 client.go:168] LocalClient.Create starting
	I0719 07:45:48.223008   10037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:48.223052   10037 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:48.223068   10037 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:48.223138   10037 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:48.223168   10037 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:48.223183   10037 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:48.223766   10037 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:48.351384   10037 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:48.467220   10037 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:48.467226   10037 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:48.467407   10037 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:48.476765   10037 main.go:141] libmachine: STDOUT: 
	I0719 07:45:48.476784   10037 main.go:141] libmachine: STDERR: 
	I0719 07:45:48.476835   10037 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2 +20000M
	I0719 07:45:48.484577   10037 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:48.484593   10037 main.go:141] libmachine: STDERR: 
	I0719 07:45:48.484603   10037 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:48.484609   10037 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:48.484620   10037 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:48.484652   10037 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:4a:80:8e:fb:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:48.486314   10037 main.go:141] libmachine: STDOUT: 
	I0719 07:45:48.486330   10037 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:48.486341   10037 client.go:171] duration metric: took 263.437542ms to LocalClient.Create
	I0719 07:45:50.488588   10037 start.go:128] duration metric: took 2.3231635s to createHost
	I0719 07:45:50.488641   10037 start.go:83] releasing machines lock for "default-k8s-diff-port-109000", held for 2.323622875s
	W0719 07:45:50.488904   10037 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-109000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-109000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:50.498480   10037 out.go:177] 
	W0719 07:45:50.509510   10037 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:50.509536   10037 out.go:239] * 
	* 
	W0719 07:45:50.512726   10037 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:50.521507   10037 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-109000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (67.002583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.83s)
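
Editor's note: every start attempt in this test dies at the same step — socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, which indicates the socket_vmnet daemon was not running (or not listening on that path) on the build host. The sub-second LocalClient.Create durations (212ms, 263ms) show an immediate connect failure rather than a boot timeout; the 9.83s total is mostly the two ~2.3s createHost waits plus the one 5-second retry. A minimal triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the log shows (the gateway address below is an illustrative assumption, not taken from this report):

    # Is the daemon alive, and does the socket exist?
    $ pgrep -fl socket_vmnet
    $ ls -l /var/run/socket_vmnet
    # If the daemon is down, start it on the path the driver expects
    # (192.168.105.1 is an assumed example gateway, adjust for the host)
    $ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet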

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-120000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-120000 create -f testdata/busybox.yaml: exit status 1 (29.670667ms)

** stderr ** 
	error: context "embed-certs-120000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-120000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (28.783417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (29.114542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
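
Editor's note: this failure is pure fallout from the failed start above — the VM never booted, so no kubeconfig context named embed-certs-120000 was ever written, and every kubectl --context invocation fails before reaching any cluster. This is quick to confirm with stock kubectl (the grep filter is only illustrative):

    # List known context names; a successful start would have added the profile
    $ kubectl config get-contexts -o name
    $ kubectl config get-contexts -o name | grep embed-certs-120000 || echo "context missing"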

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-120000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-120000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-120000 describe deploy/metrics-server -n kube-system: exit status 1 (26.807084ms)

** stderr ** 
	error: context "embed-certs-120000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-120000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (28.212667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
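
Editor's note: there is an asymmetry worth flagging here — "addons enable metrics-server" exits zero even though the host is stopped, because it only records the addon in the profile's on-disk config, while the follow-up kubectl describe needs a live cluster and fails. The persisted addon state can be inspected without a running VM; a hedged check, assuming addons list reads the profile config while the host is Stopped:

    # Addon status is read from the profile, so this should work without a running VM
    $ out/minikube-darwin-arm64 addons list -p embed-certs-120000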

TestStartStop/group/embed-certs/serial/SecondStart (5.96s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-120000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-120000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.897971125s)

-- stdout --
	* [embed-certs-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-120000" primary control-plane node in "embed-certs-120000" cluster
	* Restarting existing qemu2 VM for "embed-certs-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-120000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:45:49.713796   10091 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:49.713922   10091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:49.713925   10091 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:49.713928   10091 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:49.714064   10091 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:49.715008   10091 out.go:298] Setting JSON to false
	I0719 07:45:49.731228   10091 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6318,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:49.731301   10091 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:49.735251   10091 out.go:177] * [embed-certs-120000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:49.742080   10091 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:49.742137   10091 notify.go:220] Checking for updates...
	I0719 07:45:49.749204   10091 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:49.750556   10091 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:49.753201   10091 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:49.756221   10091 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:49.759259   10091 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:49.762498   10091 config.go:182] Loaded profile config "embed-certs-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:49.762762   10091 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:49.767174   10091 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:45:49.774164   10091 start.go:297] selected driver: qemu2
	I0719 07:45:49.774170   10091 start.go:901] validating driver "qemu2" against &{Name:embed-certs-120000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:49.774243   10091 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:49.776408   10091 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:45:49.776430   10091 cni.go:84] Creating CNI manager for ""
	I0719 07:45:49.776440   10091 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:45:49.776468   10091 start.go:340] cluster config:
	{Name:embed-certs-120000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:49.779812   10091 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:49.786161   10091 out.go:177] * Starting "embed-certs-120000" primary control-plane node in "embed-certs-120000" cluster
	I0719 07:45:49.790144   10091 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:45:49.790157   10091 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:45:49.790166   10091 cache.go:56] Caching tarball of preloaded images
	I0719 07:45:49.790232   10091 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:45:49.790237   10091 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:45:49.790290   10091 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/embed-certs-120000/config.json ...
	I0719 07:45:49.790678   10091 start.go:360] acquireMachinesLock for embed-certs-120000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:50.488777   10091 start.go:364] duration metric: took 698.05375ms to acquireMachinesLock for "embed-certs-120000"
	I0719 07:45:50.488942   10091 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:50.488975   10091 fix.go:54] fixHost starting: 
	I0719 07:45:50.489664   10091 fix.go:112] recreateIfNeeded on embed-certs-120000: state=Stopped err=<nil>
	W0719 07:45:50.489705   10091 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:50.506522   10091 out.go:177] * Restarting existing qemu2 VM for "embed-certs-120000" ...
	I0719 07:45:50.513564   10091 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:50.513772   10091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:70:15:11:40:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:50.523343   10091 main.go:141] libmachine: STDOUT: 
	I0719 07:45:50.523453   10091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:50.523606   10091 fix.go:56] duration metric: took 34.627916ms for fixHost
	I0719 07:45:50.523629   10091 start.go:83] releasing machines lock for "embed-certs-120000", held for 34.816375ms
	W0719 07:45:50.523659   10091 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:50.523859   10091 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:50.523883   10091 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:55.526001   10091 start.go:360] acquireMachinesLock for embed-certs-120000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:55.526327   10091 start.go:364] duration metric: took 237.25µs to acquireMachinesLock for "embed-certs-120000"
	I0719 07:45:55.526399   10091 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:55.526409   10091 fix.go:54] fixHost starting: 
	I0719 07:45:55.526835   10091 fix.go:112] recreateIfNeeded on embed-certs-120000: state=Stopped err=<nil>
	W0719 07:45:55.526849   10091 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:55.535061   10091 out.go:177] * Restarting existing qemu2 VM for "embed-certs-120000" ...
	I0719 07:45:55.539137   10091 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:55.539371   10091 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:70:15:11:40:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/embed-certs-120000/disk.qcow2
	I0719 07:45:55.549052   10091 main.go:141] libmachine: STDOUT: 
	I0719 07:45:55.549111   10091 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:55.549199   10091 fix.go:56] duration metric: took 22.788208ms for fixHost
	I0719 07:45:55.549221   10091 start.go:83] releasing machines lock for "embed-certs-120000", held for 22.864125ms
	W0719 07:45:55.549465   10091 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-120000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:55.556035   10091 out.go:177] 
	W0719 07:45:55.560118   10091 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:55.560155   10091 out.go:239] * 
	* 
	W0719 07:45:55.561722   10091 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:55.571074   10091 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-120000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (65.083417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.96s)
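
Editor's note: SecondStart exercises the restart path (fixHost reuses the existing machine configuration instead of creating a disk), yet it funnels into the same socket_vmnet_client invocation and fails identically, which points at the host daemon rather than the profile. The socket can be probed without minikube at all; a sketch assuming the BSD netcat shipped with macOS, whose -U flag speaks Unix-domain sockets:

    # "Connection refused" here reproduces the driver failure independent of minikube
    $ nc -U /var/run/socket_vmnet < /dev/null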

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-109000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-109000 create -f testdata/busybox.yaml: exit status 1 (30.306167ms)

** stderr ** 
	error: context "default-k8s-diff-port-109000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-109000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (28.243459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (27.906167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-109000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-109000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-109000 describe deploy/metrics-server -n kube-system: exit status 1 (26.785667ms)

** stderr ** 
	error: context "default-k8s-diff-port-109000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-109000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (28.022791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-109000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-109000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.18219325s)

-- stdout --
	* [default-k8s-diff-port-109000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-109000" primary control-plane node in "default-k8s-diff-port-109000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-109000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-109000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 07:45:53.648199   10134 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:53.648328   10134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:53.648331   10134 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:53.648337   10134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:53.648459   10134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:53.649623   10134 out.go:298] Setting JSON to false
	I0719 07:45:53.665936   10134 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6322,"bootTime":1721394031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:53.665998   10134 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:53.671090   10134 out.go:177] * [default-k8s-diff-port-109000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:53.678123   10134 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:53.678170   10134 notify.go:220] Checking for updates...
	I0719 07:45:53.686085   10134 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:53.687327   10134 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:53.690056   10134 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:53.693088   10134 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:53.696080   10134 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:53.699369   10134 config.go:182] Loaded profile config "default-k8s-diff-port-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:53.699635   10134 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:53.704029   10134 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:45:53.711084   10134 start.go:297] selected driver: qemu2
	I0719 07:45:53.711092   10134 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:53.711162   10134 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:53.713434   10134 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 07:45:53.713473   10134 cni.go:84] Creating CNI manager for ""
	I0719 07:45:53.713480   10134 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:45:53.713503   10134 start.go:340] cluster config:
	{Name:default-k8s-diff-port-109000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-109000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:53.716907   10134 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:53.723035   10134 out.go:177] * Starting "default-k8s-diff-port-109000" primary control-plane node in "default-k8s-diff-port-109000" cluster
	I0719 07:45:53.727050   10134 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:45:53.727066   10134 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:45:53.727082   10134 cache.go:56] Caching tarball of preloaded images
	I0719 07:45:53.727145   10134 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:45:53.727151   10134 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:45:53.727215   10134 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/default-k8s-diff-port-109000/config.json ...
	I0719 07:45:53.727607   10134 start.go:360] acquireMachinesLock for default-k8s-diff-port-109000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:53.727634   10134 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "default-k8s-diff-port-109000"
	I0719 07:45:53.727642   10134 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:53.727647   10134 fix.go:54] fixHost starting: 
	I0719 07:45:53.727764   10134 fix.go:112] recreateIfNeeded on default-k8s-diff-port-109000: state=Stopped err=<nil>
	W0719 07:45:53.727772   10134 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:53.732051   10134 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-109000" ...
	I0719 07:45:53.740090   10134 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:53.740124   10134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:4a:80:8e:fb:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:53.742131   10134 main.go:141] libmachine: STDOUT: 
	I0719 07:45:53.742147   10134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:53.742174   10134 fix.go:56] duration metric: took 14.526416ms for fixHost
	I0719 07:45:53.742179   10134 start.go:83] releasing machines lock for "default-k8s-diff-port-109000", held for 14.540375ms
	W0719 07:45:53.742186   10134 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:53.742222   10134 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:53.742226   10134 start.go:729] Will try again in 5 seconds ...
	I0719 07:45:58.744457   10134 start.go:360] acquireMachinesLock for default-k8s-diff-port-109000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:58.744929   10134 start.go:364] duration metric: took 331.375µs to acquireMachinesLock for "default-k8s-diff-port-109000"
	I0719 07:45:58.745056   10134 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:45:58.745078   10134 fix.go:54] fixHost starting: 
	I0719 07:45:58.745785   10134 fix.go:112] recreateIfNeeded on default-k8s-diff-port-109000: state=Stopped err=<nil>
	W0719 07:45:58.745814   10134 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:45:58.751265   10134 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-109000" ...
	I0719 07:45:58.755207   10134 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:58.755475   10134 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:4a:80:8e:fb:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/default-k8s-diff-port-109000/disk.qcow2
	I0719 07:45:58.765000   10134 main.go:141] libmachine: STDOUT: 
	I0719 07:45:58.765075   10134 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:58.765146   10134 fix.go:56] duration metric: took 20.073875ms for fixHost
	I0719 07:45:58.765162   10134 start.go:83] releasing machines lock for "default-k8s-diff-port-109000", held for 20.21175ms
	W0719 07:45:58.765381   10134 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-109000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-109000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:58.775181   10134 out.go:177] 
	W0719 07:45:58.778324   10134 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:45:58.778346   10134 out.go:239] * 
	* 
	W0719 07:45:58.781043   10134 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:45:58.789165   10134 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-109000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (64.129042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
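Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach the daemon behind /var/run/socket_vmnet, so the VM never boots and every later assertion fails downstream. A minimal diagnostic sketch for the CI host follows; it is not part of the test run, and the launchd lookup and the Homebrew service name are assumptions based on a typical socket_vmnet install:

	# Does the socket exist? (socket_vmnet must be started as root before minikube)
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded at all? The label depends on how it was installed.
	sudo launchctl list | grep -i socket_vmnet
	# For a Homebrew install, restarting the root service may clear the refusal:
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet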

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-120000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (32.697167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
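The context "embed-certs-120000" error is a downstream symptom: because the start above failed, minikube never wrote a kubeconfig entry for the profile. Two ordinary commands, shown here purely as an illustration, make that visible:

	kubectl config get-contexts              # embed-certs-120000 will be absent
	out/minikube-darwin-arm64 profile list   # minikube's own view of its profiles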

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-120000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.587334ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-120000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-120000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (29.922917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-120000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (28.751959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
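The want/got diff is empty on the "got" side only because there is no running VM to list images from. On a healthy cluster the same check can be reproduced by hand; the jq filter is a sketch that assumes each entry in the JSON output carries a repoTags array, which may not match the exact output shape of this minikube build:

	out/minikube-darwin-arm64 -p embed-certs-120000 image list --format=json \
	  | jq -r '.[].repoTags[]?'   # one repo:tag per line, e.g. registry.k8s.io/pause:3.9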

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-120000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-120000 --alsologtostderr -v=1: exit status 83 (39.831584ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-120000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-120000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:45:55.838494   10153 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:55.838635   10153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:55.838639   10153 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:55.838641   10153 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:55.838772   10153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:55.838986   10153 out.go:298] Setting JSON to false
	I0719 07:45:55.838993   10153 mustload.go:65] Loading cluster: embed-certs-120000
	I0719 07:45:55.839196   10153 config.go:182] Loaded profile config "embed-certs-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:55.842923   10153 out.go:177] * The control-plane node embed-certs-120000 host is not running: state=Stopped
	I0719 07:45:55.846884   10153 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-120000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-120000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (29.436667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (29.921625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-120000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
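Pause never reaches the guest here: exit status 83 comes from the advisory path shown above, where minikube bails out because the host is state=Stopped. For reference, once a VM actually boots, the sequence this test drives can be replayed by hand (illustrative only):

	out/minikube-darwin-arm64 start -p embed-certs-120000    # must succeed first
	out/minikube-darwin-arm64 pause -p embed-certs-120000
	out/minikube-darwin-arm64 status -p embed-certs-120000   # apiserver should now report Paused
	out/minikube-darwin-arm64 unpause -p embed-certs-120000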

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-038000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-038000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.8636955s)

                                                
                                                
-- stdout --
	* [newest-cni-038000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-038000" primary control-plane node in "newest-cni-038000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-038000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:45:56.150788   10170 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:56.150925   10170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:56.150929   10170 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:56.150931   10170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:56.151083   10170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:56.152247   10170 out.go:298] Setting JSON to false
	I0719 07:45:56.169833   10170 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6325,"bootTime":1721394031,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:45:56.169907   10170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:45:56.173888   10170 out.go:177] * [newest-cni-038000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:45:56.181905   10170 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:45:56.182005   10170 notify.go:220] Checking for updates...
	I0719 07:45:56.189849   10170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:45:56.197977   10170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:45:56.201890   10170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:45:56.209895   10170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:45:56.213968   10170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:45:56.217295   10170 config.go:182] Loaded profile config "default-k8s-diff-port-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:56.217371   10170 config.go:182] Loaded profile config "multinode-023000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:56.217430   10170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:45:56.221747   10170 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 07:45:56.228897   10170 start.go:297] selected driver: qemu2
	I0719 07:45:56.228904   10170 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:45:56.228910   10170 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:45:56.231155   10170 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0719 07:45:56.231178   10170 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0719 07:45:56.234897   10170 out.go:177] * Automatically selected the socket_vmnet network
	I0719 07:45:56.238916   10170 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 07:45:56.238946   10170 cni.go:84] Creating CNI manager for ""
	I0719 07:45:56.238954   10170 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:45:56.238959   10170 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:45:56.239000   10170 start.go:340] cluster config:
	{Name:newest-cni-038000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:45:56.242745   10170 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:45:56.250908   10170 out.go:177] * Starting "newest-cni-038000" primary control-plane node in "newest-cni-038000" cluster
	I0719 07:45:56.253932   10170 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:45:56.253946   10170 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 07:45:56.253956   10170 cache.go:56] Caching tarball of preloaded images
	I0719 07:45:56.254020   10170 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:45:56.254026   10170 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 07:45:56.254084   10170 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/newest-cni-038000/config.json ...
	I0719 07:45:56.254096   10170 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/newest-cni-038000/config.json: {Name:mkde67e7b1f8c700abbdf17d4be826b0c49eaad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:45:56.254340   10170 start.go:360] acquireMachinesLock for newest-cni-038000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:45:56.254373   10170 start.go:364] duration metric: took 27.166µs to acquireMachinesLock for "newest-cni-038000"
	I0719 07:45:56.254383   10170 start.go:93] Provisioning new machine with config: &{Name:newest-cni-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:45:56.254413   10170 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:45:56.261944   10170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:45:56.280637   10170 start.go:159] libmachine.API.Create for "newest-cni-038000" (driver="qemu2")
	I0719 07:45:56.280673   10170 client.go:168] LocalClient.Create starting
	I0719 07:45:56.280742   10170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:45:56.280773   10170 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:56.280783   10170 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:56.280825   10170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:45:56.280849   10170 main.go:141] libmachine: Decoding PEM data...
	I0719 07:45:56.280855   10170 main.go:141] libmachine: Parsing certificate...
	I0719 07:45:56.281226   10170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:45:56.410276   10170 main.go:141] libmachine: Creating SSH key...
	I0719 07:45:56.619434   10170 main.go:141] libmachine: Creating Disk image...
	I0719 07:45:56.619441   10170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:45:56.619656   10170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:45:56.629474   10170 main.go:141] libmachine: STDOUT: 
	I0719 07:45:56.629490   10170 main.go:141] libmachine: STDERR: 
	I0719 07:45:56.629535   10170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2 +20000M
	I0719 07:45:56.637532   10170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:45:56.637546   10170 main.go:141] libmachine: STDERR: 
	I0719 07:45:56.637562   10170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:45:56.637565   10170 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:45:56.637593   10170 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:45:56.637616   10170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e3:2e:6a:36:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:45:56.639203   10170 main.go:141] libmachine: STDOUT: 
	I0719 07:45:56.639225   10170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:45:56.639241   10170 client.go:171] duration metric: took 358.567208ms to LocalClient.Create
	I0719 07:45:58.641399   10170 start.go:128] duration metric: took 2.3869895s to createHost
	I0719 07:45:58.641469   10170 start.go:83] releasing machines lock for "newest-cni-038000", held for 2.38710975s
	W0719 07:45:58.641512   10170 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:58.654503   10170 out.go:177] * Deleting "newest-cni-038000" in qemu2 ...
	W0719 07:45:58.683047   10170 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:45:58.683085   10170 start.go:729] Will try again in 5 seconds ...
	I0719 07:46:03.685224   10170 start.go:360] acquireMachinesLock for newest-cni-038000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:46:03.685695   10170 start.go:364] duration metric: took 378.334µs to acquireMachinesLock for "newest-cni-038000"
	I0719 07:46:03.685859   10170 start.go:93] Provisioning new machine with config: &{Name:newest-cni-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 07:46:03.686155   10170 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 07:46:03.690850   10170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 07:46:03.741530   10170 start.go:159] libmachine.API.Create for "newest-cni-038000" (driver="qemu2")
	I0719 07:46:03.741588   10170 client.go:168] LocalClient.Create starting
	I0719 07:46:03.741715   10170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/ca.pem
	I0719 07:46:03.741785   10170 main.go:141] libmachine: Decoding PEM data...
	I0719 07:46:03.741800   10170 main.go:141] libmachine: Parsing certificate...
	I0719 07:46:03.741864   10170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-5980/.minikube/certs/cert.pem
	I0719 07:46:03.741909   10170 main.go:141] libmachine: Decoding PEM data...
	I0719 07:46:03.741926   10170 main.go:141] libmachine: Parsing certificate...
	I0719 07:46:03.742456   10170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0719 07:46:03.869498   10170 main.go:141] libmachine: Creating SSH key...
	I0719 07:46:03.920802   10170 main.go:141] libmachine: Creating Disk image...
	I0719 07:46:03.920808   10170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 07:46:03.920998   10170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2.raw /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:46:03.930385   10170 main.go:141] libmachine: STDOUT: 
	I0719 07:46:03.930403   10170 main.go:141] libmachine: STDERR: 
	I0719 07:46:03.930495   10170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2 +20000M
	I0719 07:46:03.938344   10170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 07:46:03.938361   10170 main.go:141] libmachine: STDERR: 
	I0719 07:46:03.938371   10170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:46:03.938374   10170 main.go:141] libmachine: Starting QEMU VM...
	I0719 07:46:03.938386   10170 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:46:03.938420   10170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:62:e9:d7:a6:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:46:03.940059   10170 main.go:141] libmachine: STDOUT: 
	I0719 07:46:03.940075   10170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:46:03.940087   10170 client.go:171] duration metric: took 198.494375ms to LocalClient.Create
	I0719 07:46:05.942269   10170 start.go:128] duration metric: took 2.256081958s to createHost
	I0719 07:46:05.942353   10170 start.go:83] releasing machines lock for "newest-cni-038000", held for 2.256631458s
	W0719 07:46:05.942804   10170 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-038000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:46:05.952376   10170 out.go:177] 
	W0719 07:46:05.957386   10170 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:46:05.957515   10170 out.go:239] * 
	W0719 07:46:05.960284   10170 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:46:05.975740   10170 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-038000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000: exit status 7 (66.262208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-038000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.93s)
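Note that both qemu-img steps in the log (convert and resize) succeed; only the socket_vmnet hand-off fails. One way to isolate the network layer is to boot the same image with QEMU's user-mode backend instead of the socket backend, or to let minikube do the equivalent itself. Both commands are a sketch: the paths are taken from this log, but the stripped-down flag set and the --network=builtin value are assumptions, not something this run exercised:

	MK=/Users/jenkins/minikube-integration/19302-5980/.minikube   # path from this log
	# Same machine image, but user-mode networking (no socket_vmnet involved):
	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -boot d -cdrom $MK/machines/newest-cni-038000/boot2docker.iso \
	  -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
	  $MK/machines/newest-cni-038000/disk.qcow2
	# Or have minikube skip socket_vmnet entirely:
	out/minikube-darwin-arm64 start -p newest-cni-038000 --driver=qemu2 --network=builtin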

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-109000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (31.829584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-109000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-109000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-109000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.676625ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-109000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-109000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (28.848125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-109000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (28.205291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-109000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-109000 --alsologtostderr -v=1: exit status 83 (41.3415ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-109000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-109000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:45:59.052830   10192 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:45:59.052977   10192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:59.052980   10192 out.go:304] Setting ErrFile to fd 2...
	I0719 07:45:59.052983   10192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:45:59.053110   10192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:45:59.053332   10192 out.go:298] Setting JSON to false
	I0719 07:45:59.053338   10192 mustload.go:65] Loading cluster: default-k8s-diff-port-109000
	I0719 07:45:59.053514   10192 config.go:182] Loaded profile config "default-k8s-diff-port-109000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:45:59.057232   10192 out.go:177] * The control-plane node default-k8s-diff-port-109000 host is not running: state=Stopped
	I0719 07:45:59.061234   10192 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-109000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-109000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (29.003458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (28.95925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-109000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-038000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-038000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.188730875s)

                                                
                                                
-- stdout --
	* [newest-cni-038000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-038000" primary control-plane node in "newest-cni-038000" cluster
	* Restarting existing qemu2 VM for "newest-cni-038000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-038000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:46:09.733568   10241 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:46:09.733701   10241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:46:09.733704   10241 out.go:304] Setting ErrFile to fd 2...
	I0719 07:46:09.733706   10241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:46:09.733828   10241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:46:09.734824   10241 out.go:298] Setting JSON to false
	I0719 07:46:09.750951   10241 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6338,"bootTime":1721394031,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:46:09.751011   10241 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:46:09.755955   10241 out.go:177] * [newest-cni-038000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:46:09.763002   10241 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:46:09.763066   10241 notify.go:220] Checking for updates...
	I0719 07:46:09.770912   10241 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:46:09.773997   10241 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:46:09.776948   10241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:46:09.779913   10241 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:46:09.782992   10241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:46:09.786159   10241 config.go:182] Loaded profile config "newest-cni-038000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 07:46:09.786442   10241 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:46:09.790985   10241 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:46:09.797948   10241 start.go:297] selected driver: qemu2
	I0719 07:46:09.797953   10241 start.go:901] validating driver "qemu2" against &{Name:newest-cni-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:46:09.797999   10241 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:46:09.800573   10241 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 07:46:09.800611   10241 cni.go:84] Creating CNI manager for ""
	I0719 07:46:09.800618   10241 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:46:09.800644   10241 start.go:340] cluster config:
	{Name:newest-cni-038000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-038000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:46:09.804349   10241 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:46:09.811787   10241 out.go:177] * Starting "newest-cni-038000" primary control-plane node in "newest-cni-038000" cluster
	I0719 07:46:09.815928   10241 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:46:09.815948   10241 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 07:46:09.815959   10241 cache.go:56] Caching tarball of preloaded images
	I0719 07:46:09.816013   10241 preload.go:172] Found /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 07:46:09.816019   10241 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 07:46:09.816073   10241 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/newest-cni-038000/config.json ...
	I0719 07:46:09.816449   10241 start.go:360] acquireMachinesLock for newest-cni-038000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:46:09.816480   10241 start.go:364] duration metric: took 24.458µs to acquireMachinesLock for "newest-cni-038000"
	I0719 07:46:09.816488   10241 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:46:09.816495   10241 fix.go:54] fixHost starting: 
	I0719 07:46:09.816615   10241 fix.go:112] recreateIfNeeded on newest-cni-038000: state=Stopped err=<nil>
	W0719 07:46:09.816624   10241 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:46:09.820849   10241 out.go:177] * Restarting existing qemu2 VM for "newest-cni-038000" ...
	I0719 07:46:09.828911   10241 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:46:09.828950   10241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:62:e9:d7:a6:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:46:09.831182   10241 main.go:141] libmachine: STDOUT: 
	I0719 07:46:09.831204   10241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:46:09.831234   10241 fix.go:56] duration metric: took 14.739042ms for fixHost
	I0719 07:46:09.831238   10241 start.go:83] releasing machines lock for "newest-cni-038000", held for 14.754084ms
	W0719 07:46:09.831251   10241 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:46:09.831295   10241 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:46:09.831300   10241 start.go:729] Will try again in 5 seconds ...
	I0719 07:46:14.833462   10241 start.go:360] acquireMachinesLock for newest-cni-038000: {Name:mkb1bd00a8f90b3715494b0ff96f5e67fab3ab12 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 07:46:14.833867   10241 start.go:364] duration metric: took 323.125µs to acquireMachinesLock for "newest-cni-038000"
	I0719 07:46:14.834003   10241 start.go:96] Skipping create...Using existing machine configuration
	I0719 07:46:14.834023   10241 fix.go:54] fixHost starting: 
	I0719 07:46:14.834696   10241 fix.go:112] recreateIfNeeded on newest-cni-038000: state=Stopped err=<nil>
	W0719 07:46:14.834725   10241 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 07:46:14.844263   10241 out.go:177] * Restarting existing qemu2 VM for "newest-cni-038000" ...
	I0719 07:46:14.848214   10241 qemu.go:418] Using hvf for hardware acceleration
	I0719 07:46:14.848405   10241 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:62:e9:d7:a6:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19302-5980/.minikube/machines/newest-cni-038000/disk.qcow2
	I0719 07:46:14.857311   10241 main.go:141] libmachine: STDOUT: 
	I0719 07:46:14.857363   10241 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 07:46:14.857434   10241 fix.go:56] duration metric: took 23.41225ms for fixHost
	I0719 07:46:14.857448   10241 start.go:83] releasing machines lock for "newest-cni-038000", held for 23.557916ms
	W0719 07:46:14.857579   10241 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-038000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-038000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 07:46:14.865252   10241 out.go:177] 
	W0719 07:46:14.869229   10241 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 07:46:14.869249   10241 out.go:239] * 
	* 
	W0719 07:46:14.871702   10241 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:46:14.880241   10241 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-038000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000: exit status 7 (67.285167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-038000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
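Editor's note: the repeated "Connection refused" on /var/run/socket_vmnet above means no socket_vmnet daemon was listening when socket_vmnet_client tried to hand a network file descriptor to qemu-system-aarch64, so every restart attempt failed before the VM could boot. A minimal sketch of reproducing that check outside minikube, by dialing the unix socket directly (the path is taken from the failing command line above; this is not minikube's own code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the failing socket_vmnet_client invocation.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On a run like the one above this prints:
		// dial unix /var/run/socket_vmnet: connect: connection refused
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}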

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-038000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000: exit status 7 (29.18025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-038000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
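Editor's note: the "(-want +got)" diff above is the output style of github.com/google/go-cmp. A minimal sketch of how a comparison like it is produced, assuming (as the failure suggests) that `image list` returned nothing because the VM never started; the exact helper in start_stop_delete_test.go may differ:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected images for v1.31.0-beta.0, copied from the diff above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/etcd:3.5.14-0",
		"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
		"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
		"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the stopped VM reported no images
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}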

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-038000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-038000 --alsologtostderr -v=1: exit status 83 (42.344417ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-038000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-038000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 07:46:15.064385   10255 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:46:15.064529   10255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:46:15.064532   10255 out.go:304] Setting ErrFile to fd 2...
	I0719 07:46:15.064535   10255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:46:15.064687   10255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:46:15.064929   10255 out.go:298] Setting JSON to false
	I0719 07:46:15.064935   10255 mustload.go:65] Loading cluster: newest-cni-038000
	I0719 07:46:15.065141   10255 config.go:182] Loaded profile config "newest-cni-038000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0719 07:46:15.069337   10255 out.go:177] * The control-plane node newest-cni-038000 host is not running: state=Stopped
	I0719 07:46:15.073309   10255 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-038000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-038000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000: exit status 7 (29.510542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-038000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000: exit status 7 (29.779583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-038000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
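Editor's note: each post-mortem above shells out to the minikube binary and inspects the exit code (7 here means the host is stopped, which the helper treats as "may be ok"). A hedged sketch of that pattern with os/exec; the structure is illustrative, not the actual code in helpers_test.go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the post-mortem above.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "newest-cni-038000")
	out, err := cmd.Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode() // 7 when the host is stopped
	}
	fmt.Printf("host state %q, exit status %d\n", string(out), code)
}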

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 12.29
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 17.85
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.29
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.55
48 TestErrorSpam/start 0.38
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.11
52 TestErrorSpam/stop 10.27
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.7
64 TestFunctional/serial/CacheCmd/cache/add_local 1.05
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.2
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 1.79
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
135 TestFunctional/parallel/ProfileCmd/profile_list 0.08
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 1.98
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
249 TestStoppedBinaryUpgrade/Setup 0.96
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.34
267 TestNoKubernetes/serial/Stop 2.03
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
283 TestStartStop/group/old-k8s-version/serial/Stop 3.32
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.64
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
295 TestStartStop/group/no-preload/serial/Stop 3.4
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
308 TestStartStop/group/embed-certs/serial/Stop 3.82
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.69
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.46
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
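Editor's note: preload-exists only has to confirm that the tarball cached by the earlier download step is present on disk. A minimal sketch of that check, with the cache path copied from elsewhere in this log; the real assertion in aaa_download_only_test.go may verify more:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache location as it appears in the v1.20.0 download log below.
	tarball := "/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/" +
		"preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4"
	if info, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
	} else {
		fmt.Printf("preload present, %d bytes\n", info.Size())
	}
}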

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-549000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-549000: exit status 85 (94.959333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |          |
	|         | -p download-only-549000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:19:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:19:30.806428    6475 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:19:30.806578    6475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:30.806581    6475 out.go:304] Setting ErrFile to fd 2...
	I0719 07:19:30.806584    6475 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:30.806701    6475 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	W0719 07:19:30.806776    6475 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19302-5980/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19302-5980/.minikube/config/config.json: no such file or directory
	I0719 07:19:30.808184    6475 out.go:298] Setting JSON to true
	I0719 07:19:30.825868    6475 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4739,"bootTime":1721394031,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:19:30.825941    6475 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:19:30.830661    6475 out.go:97] [download-only-549000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:19:30.830846    6475 notify.go:220] Checking for updates...
	W0719 07:19:30.830895    6475 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 07:19:30.835623    6475 out.go:169] MINIKUBE_LOCATION=19302
	I0719 07:19:30.839228    6475 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:19:30.845238    6475 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:19:30.848236    6475 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:19:30.851489    6475 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	W0719 07:19:30.859677    6475 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 07:19:30.859943    6475 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:19:30.863469    6475 out.go:97] Using the qemu2 driver based on user configuration
	I0719 07:19:30.863497    6475 start.go:297] selected driver: qemu2
	I0719 07:19:30.863513    6475 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:19:30.863588    6475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:19:30.865677    6475 out.go:169] Automatically selected the socket_vmnet network
	I0719 07:19:30.871524    6475 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 07:19:30.871619    6475 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:19:30.871690    6475 cni.go:84] Creating CNI manager for ""
	I0719 07:19:30.871707    6475 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 07:19:30.871788    6475 start.go:340] cluster config:
	{Name:download-only-549000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:19:30.875475    6475 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:19:30.880101    6475 out.go:97] Downloading VM boot image ...
	I0719 07:19:30.880128    6475 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0719 07:19:37.351861    6475 out.go:97] Starting "download-only-549000" primary control-plane node in "download-only-549000" cluster
	I0719 07:19:37.351911    6475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:37.411824    6475 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 07:19:37.411844    6475 cache.go:56] Caching tarball of preloaded images
	I0719 07:19:37.412031    6475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:37.416221    6475 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 07:19:37.416228    6475 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:37.501051    6475 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0719 07:19:50.086542    6475 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:50.086707    6475 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:50.782351    6475 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 07:19:50.782564    6475 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-549000/config.json ...
	I0719 07:19:50.782597    6475 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-549000/config.json: {Name:mk80651b58e82497ca1ac3cf10697acde6242843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:19:50.782839    6475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 07:19:50.783695    6475 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0719 07:19:51.131974    6475 out.go:169] 
	W0719 07:19:51.135951    6475 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60 0x108881a60] Decompressors:map[bz2:0x14000704270 gz:0x14000704278 tar:0x14000704220 tar.bz2:0x14000704230 tar.gz:0x14000704240 tar.xz:0x14000704250 tar.zst:0x14000704260 tbz2:0x14000704230 tgz:0x14000704240 txz:0x14000704250 tzst:0x14000704260 xz:0x14000704280 zip:0x14000704290 zst:0x14000704288] Getters:map[file:0x140008887d0 http:0x1400087a410 https:0x1400087a460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0719 07:19:51.135976    6475 out_reason.go:110] 
	W0719 07:19:51.143855    6475 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 07:19:51.147873    6475 out.go:169] 
	
	
	* The control-plane node download-only-549000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-549000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
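Editor's note: the kubectl download failure above happens inside hashicorp/go-getter (the &{Ctx:...} dump is a go-getter client struct). minikube requests the binary with a ?checksum=file:<url> query, go-getter first fetches the .sha256 sidecar, and that request returns 404, presumably because no darwin/arm64 kubectl was ever published for v1.20.0. A minimal reproduction of the same fetch, under the assumption that plain go-getter v1 behaves like the copy vendored into minikube:

package main

import (
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// URL copied from the log above; the checksum file is what 404s.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	if err := getter.GetFile("/tmp/kubectl", src); err != nil {
		// Expected here: "bad response code: 404" while resolving the checksum.
		fmt.Println("download failed:", err)
	}
}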

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-549000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (12.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-746000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-746000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (12.287696875s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.29s)
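Editor's note: json-events runs `minikube start -o=json` and consumes the event stream on stdout, one JSON object per line. A hedged sketch of reading such a stream generically; minikube's real event schema is richer than this, and the flags shown are a trimmed version of the test's:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-o=json", "--download-only", "-p", "download-only-746000",
		"--kubernetes-version=v1.30.3", "--driver=qemu2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	dec := json.NewDecoder(stdout)
	for {
		var ev map[string]interface{} // schema left loose on purpose
		if err := dec.Decode(&ev); err != nil {
			break // io.EOF once the stream ends
		}
		fmt.Println("event type:", ev["type"])
	}
	_ = cmd.Wait()
}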

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-746000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-746000: exit status 85 (76.874042ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-549000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
	| delete  | -p download-only-549000        | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
	| start   | -o=json --download-only        | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-746000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:19:51
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:19:51.562754    6503 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:19:51.562900    6503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:51.562903    6503 out.go:304] Setting ErrFile to fd 2...
	I0719 07:19:51.562905    6503 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:19:51.563023    6503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:19:51.564141    6503 out.go:298] Setting JSON to true
	I0719 07:19:51.580587    6503 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4760,"bootTime":1721394031,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:19:51.580657    6503 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:19:51.584765    6503 out.go:97] [download-only-746000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:19:51.584871    6503 notify.go:220] Checking for updates...
	I0719 07:19:51.588916    6503 out.go:169] MINIKUBE_LOCATION=19302
	I0719 07:19:51.591937    6503 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:19:51.595880    6503 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:19:51.598891    6503 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:19:51.601905    6503 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	W0719 07:19:51.607829    6503 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 07:19:51.607956    6503 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:19:51.610888    6503 out.go:97] Using the qemu2 driver based on user configuration
	I0719 07:19:51.610900    6503 start.go:297] selected driver: qemu2
	I0719 07:19:51.610904    6503 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:19:51.610963    6503 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:19:51.612244    6503 out.go:169] Automatically selected the socket_vmnet network
	I0719 07:19:51.617095    6503 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 07:19:51.617185    6503 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:19:51.617200    6503 cni.go:84] Creating CNI manager for ""
	I0719 07:19:51.617211    6503 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:19:51.617221    6503 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:19:51.617259    6503 start.go:340] cluster config:
	{Name:download-only-746000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:19:51.620985    6503 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:19:51.623895    6503 out.go:97] Starting "download-only-746000" primary control-plane node in "download-only-746000" cluster
	I0719 07:19:51.623902    6503 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:19:51.682030    6503 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:19:51.682045    6503 cache.go:56] Caching tarball of preloaded images
	I0719 07:19:51.682203    6503 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:19:51.687392    6503 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 07:19:51.687400    6503 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:51.773396    6503 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0719 07:19:59.335293    6503 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:59.335458    6503 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:19:59.878324    6503 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 07:19:59.878514    6503 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-746000/config.json ...
	I0719 07:19:59.878534    6503 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-746000/config.json: {Name:mk6f19a165cc0a7cb22444b2366320c58539687c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:19:59.878762    6503 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 07:19:59.878879    6503 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-746000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-746000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
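Editor's note: the preload above is fetched with a ?checksum=md5:... query and then re-verified on disk ("saving checksum ... verifying checksum ..."). A minimal sketch of that verification step with crypto/md5; the file path and digest are the ones from this log, but the hashing loop is illustrative rather than minikube's own:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"os"
)

func main() {
	const wantMD5 = "5a76dba1959f6b6fc5e29e1e172ab9ca" // from the download URL above
	f, err := os.Open("/Users/jenkins/minikube-integration/19302-5980/.minikube/cache/" +
		"preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantMD5 {
		fmt.Printf("checksum mismatch: want %s, got %s\n", wantMD5, got)
	} else {
		fmt.Println("preload checksum verified")
	}
}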

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-746000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (17.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-899000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-899000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (17.849613833s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (17.85s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-899000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-899000: exit status 85 (76.627625ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-549000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
	| delete  | -p download-only-549000             | download-only-549000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT | 19 Jul 24 07:19 PDT |
	| start   | -o=json --download-only             | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:19 PDT |                     |
	|         | -p download-only-746000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| delete  | -p download-only-746000             | download-only-746000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT | 19 Jul 24 07:20 PDT |
	| start   | -o=json --download-only             | download-only-899000 | jenkins | v1.33.1 | 19 Jul 24 07:20 PDT |                     |
	|         | -p download-only-899000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 07:20:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 07:20:04.139473    6525 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:20:04.139602    6525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:04.139605    6525 out.go:304] Setting ErrFile to fd 2...
	I0719 07:20:04.139608    6525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:20:04.139739    6525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:20:04.140739    6525 out.go:298] Setting JSON to true
	I0719 07:20:04.157022    6525 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4773,"bootTime":1721394031,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:20:04.157089    6525 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:20:04.161958    6525 out.go:97] [download-only-899000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:20:04.162081    6525 notify.go:220] Checking for updates...
	I0719 07:20:04.166009    6525 out.go:169] MINIKUBE_LOCATION=19302
	I0719 07:20:04.170022    6525 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:20:04.174016    6525 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:20:04.176991    6525 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:20:04.180024    6525 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	W0719 07:20:04.185998    6525 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 07:20:04.186168    6525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:20:04.188937    6525 out.go:97] Using the qemu2 driver based on user configuration
	I0719 07:20:04.188946    6525 start.go:297] selected driver: qemu2
	I0719 07:20:04.188949    6525 start.go:901] validating driver "qemu2" against <nil>
	I0719 07:20:04.188997    6525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 07:20:04.191946    6525 out.go:169] Automatically selected the socket_vmnet network
	I0719 07:20:04.196974    6525 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 07:20:04.197060    6525 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 07:20:04.197077    6525 cni.go:84] Creating CNI manager for ""
	I0719 07:20:04.197086    6525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 07:20:04.197095    6525 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 07:20:04.197130    6525 start.go:340] cluster config:
	{Name:download-only-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-899000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:20:04.200535    6525 iso.go:125] acquiring lock: {Name:mkb591f3ae16b351d87673723de76e5dfe8a040b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 07:20:04.203993    6525 out.go:97] Starting "download-only-899000" primary control-plane node in "download-only-899000" cluster
	I0719 07:20:04.204000    6525 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:20:04.265153    6525 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 07:20:04.265177    6525 cache.go:56] Caching tarball of preloaded images
	I0719 07:20:04.265368    6525 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:20:04.269523    6525 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 07:20:04.269530    6525 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:20:04.373046    6525 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0719 07:20:12.547693    6525 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:20:12.547861    6525 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 07:20:13.066204    6525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0719 07:20:13.066396    6525 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-899000/config.json ...
	I0719 07:20:13.066414    6525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-5980/.minikube/profiles/download-only-899000/config.json: {Name:mk15e985969fa093e66cf883597922228054be72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 07:20:13.066667    6525 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 07:20:13.066796    6525 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-5980/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-899000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-899000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)
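
The download-only flow exercised above can be reproduced by hand with the same commands recorded in the audit table (profile name, version, and flags taken from the log; the cache locations are simply where this run put things, not a stable contract):

    # fetch the preload tarball and kubectl for a Kubernetes version without booting a VM
    out/minikube-darwin-arm64 start -o=json --download-only -p download-only-899000 \
      --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 \
      --container-runtime=docker --driver=qemu2

    # clean up the throwaway profile afterwards, as the tests do
    out/minikube-darwin-arm64 delete --all
    out/minikube-darwin-arm64 delete -p download-only-899000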

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-899000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-997000 --alsologtostderr --binary-mirror http://127.0.0.1:50963 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-997000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-997000
--- PASS: TestBinaryMirror (0.29s)
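
The --binary-mirror flag points minikube at an alternative host for its kubectl/kubelet/kubeadm downloads instead of dl.k8s.io. A minimal sketch of standing one up locally (the python3 one-liner and its directory layout are illustrative assumptions, not part of the test):

    # serve a directory of Kubernetes release binaries on the port the test used
    python3 -m http.server 50963 &

    # download binaries through the local mirror rather than dl.k8s.io
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-997000 \
      --binary-mirror http://127.0.0.1:50963 --driver=qemu2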

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-047000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-047000: exit status 85 (55.737292ms)

-- stdout --
	* Profile "addons-047000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-047000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-047000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-047000: exit status 85 (59.586209ms)

-- stdout --
	* Profile "addons-047000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-047000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
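
Both addon toggles above exit with status 85 and a "Profile not found" message because no addons-047000 cluster exists yet. Against a running cluster the same commands succeed; a sketch, assuming the start itself works (on this runner it does not, per the failed TestAddons/Setup):

    out/minikube-darwin-arm64 start -p addons-047000 --driver=qemu2
    out/minikube-darwin-arm64 addons enable dashboard -p addons-047000
    out/minikube-darwin-arm64 addons disable dashboard -p addons-047000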

TestHyperKitDriverInstallOrUpdate (10.55s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.55s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status: exit status 7 (30.200791ms)

-- stdout --
	nospam-848000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status: exit status 7 (29.731541ms)

-- stdout --
	nospam-848000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status: exit status 7 (29.229917ms)

-- stdout --
	nospam-848000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
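
The status checks above exit non-zero (status 7) while the profile's host is stopped, which makes "minikube status" usable as a scripted liveness probe:

    # non-zero exit while nospam-848000's host is stopped
    if ! out/minikube-darwin-arm64 -p nospam-848000 status >/dev/null; then
      echo "nospam-848000 is not running"
    fi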

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause: exit status 83 (39.901166ms)

-- stdout --
	* The control-plane node nospam-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-848000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause: exit status 83 (39.868708ms)

-- stdout --
	* The control-plane node nospam-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-848000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause: exit status 83 (40.818041ms)

-- stdout --
	* The control-plane node nospam-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-848000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.11s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause: exit status 83 (36.952791ms)

-- stdout --
	* The control-plane node nospam-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-848000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause: exit status 83 (38.6545ms)

-- stdout --
	* The control-plane node nospam-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-848000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause: exit status 83 (38.75625ms)

-- stdout --
	* The control-plane node nospam-848000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-848000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.11s)

TestErrorSpam/stop (10.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 stop: (3.321446291s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 stop: (3.411006417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-848000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-848000 stop: (3.537116208s)
--- PASS: TestErrorSpam/stop (10.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19302-5980/.minikube/files/etc/test/nested/copy/6473/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.70s)

TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local369842568/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cache add minikube-local-cache-test:functional-971000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 cache delete minikube-local-cache-test:functional-971000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-971000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
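
Taken together, the cache subtests above cover the full lifecycle of minikube's image cache; the same sequence by hand (image tag and profile as used in the tests):

    out/minikube-darwin-arm64 -p functional-971000 cache add registry.k8s.io/pause:3.1
    out/minikube-darwin-arm64 cache list
    out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1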

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 config get cpus: exit status 14 (29.289583ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 config get cpus: exit status 14 (36.32075ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
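
As the two non-zero exits above show, "config get" returns status 14 when the requested key is absent, so the unset/get pairs are the expected failure mode rather than a bug. The round-trip by hand:

    out/minikube-darwin-arm64 -p functional-971000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-971000 config get cpus                    # prints 2
    out/minikube-darwin-arm64 -p functional-971000 config unset cpus
    out/minikube-darwin-arm64 -p functional-971000 config get cpus || echo "exit $?"  # exit 14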

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-971000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-971000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (157.255083ms)

-- stdout --
	* [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0719 07:21:57.780935    7105 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:21:57.781116    7105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:57.781121    7105 out.go:304] Setting ErrFile to fd 2...
	I0719 07:21:57.781124    7105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:57.781287    7105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:21:57.782548    7105 out.go:298] Setting JSON to false
	I0719 07:21:57.802595    7105 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4886,"bootTime":1721394031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:21:57.802663    7105 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:21:57.806545    7105 out.go:177] * [functional-971000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0719 07:21:57.813535    7105 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:21:57.813572    7105 notify.go:220] Checking for updates...
	I0719 07:21:57.820491    7105 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:21:57.823356    7105 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:21:57.826471    7105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:21:57.829500    7105 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:21:57.830573    7105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:21:57.833816    7105 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:21:57.834110    7105 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:21:57.838420    7105 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 07:21:57.843467    7105 start.go:297] selected driver: qemu2
	I0719 07:21:57.843475    7105 start.go:901] validating driver "qemu2" against &{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:21:57.843544    7105 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:21:57.850496    7105 out.go:177] 
	W0719 07:21:57.854436    7105 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 07:21:57.858476    7105 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-971000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
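
The dry run fails validation (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB usable minimum reported in the log. A contrasting pair (the 2200MB figure is just any value above the floor):

    # rejected: below the usable minimum
    out/minikube-darwin-arm64 start -p functional-971000 --dry-run --memory 250MB --driver=qemu2

    # passes the memory check
    out/minikube-darwin-arm64 start -p functional-971000 --dry-run --memory 2200MB --driver=qemu2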

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-971000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-971000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.942166ms)

-- stdout --
	* [functional-971000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0719 07:21:58.006605    7116 out.go:291] Setting OutFile to fd 1 ...
	I0719 07:21:58.006709    7116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.006713    7116 out.go:304] Setting ErrFile to fd 2...
	I0719 07:21:58.006715    7116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 07:21:58.006839    7116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-5980/.minikube/bin
	I0719 07:21:58.008186    7116 out.go:298] Setting JSON to false
	I0719 07:21:58.025027    7116 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4887,"bootTime":1721394031,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 07:21:58.025117    7116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 07:21:58.028565    7116 out.go:177] * [functional-971000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0719 07:21:58.033471    7116 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 07:21:58.033533    7116 notify.go:220] Checking for updates...
	I0719 07:21:58.040424    7116 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	I0719 07:21:58.043514    7116 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 07:21:58.046452    7116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 07:21:58.047738    7116 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	I0719 07:21:58.050450    7116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 07:21:58.053814    7116 config.go:182] Loaded profile config "functional-971000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 07:21:58.054040    7116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 07:21:58.060309    7116 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0719 07:21:58.067469    7116 start.go:297] selected driver: qemu2
	I0719 07:21:58.067476    7116 start.go:901] validating driver "qemu2" against &{Name:functional-971000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-971000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 07:21:58.067556    7116 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 07:21:58.074439    7116 out.go:177] 
	W0719 07:21:58.078443    7116 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 07:21:58.082487    7116 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
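
The French output above comes from minikube's locale-driven localization; the test run evidently had a French locale in its environment. One way to reproduce it (the LC_ALL setting is an assumption of this sketch, not taken from the log):

    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-971000 --dry-run --memory 250MB --driver=qemu2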

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.756869875s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-971000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image rm docker.io/kicbase/echo-server:functional-971000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-971000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 image save --daemon docker.io/kicbase/echo-server:functional-971000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-971000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "48.737333ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.88675ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "46.093958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.668708ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.011974208s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
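
dscacheutil queries macOS's directory-services resolver, so a hit here shows the tunnel's DNS entry is visible system-wide, not just to a single process; the trailing dot keeps the name fully qualified:

    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.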

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-971000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-971000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-971000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-971000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.98s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-120000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-120000 --output=json --user=testUser: (1.977803959s)
--- PASS: TestJSONOutput/stop/Command (1.98s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-804000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-804000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.789792ms)
-- stdout --
	{"specversion":"1.0","id":"dbe7d518-de23-446b-8ab0-c287ac5f8307","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-804000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b062d599-68c2-4f5f-9813-5c67eaa4ac06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"c14b77eb-e0f7-4079-b6c5-4abe7e86c9b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig"}}
	{"specversion":"1.0","id":"26a9b09b-78ca-40d0-96c5-a3049325323c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b2218cb3-8bf0-41a9-a130-a72e2fe0a017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6eb7c4ec-347a-414a-90ac-2ec2af4da707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube"}}
	{"specversion":"1.0","id":"028e9a5d-ac3f-4899-a68d-078ee07902d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"af7c9fa2-8594-4309-a18d-9a82f17c7518","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-804000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-804000
--- PASS: TestErrorJSONOutput (0.20s)
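
Each event line captured above under --output=json is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data). A minimal sketch of decoding such a stream in Go; the struct below mirrors only the keys visible in this log, not minikube's own schema definition:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors the keys visible in the events logged above.
	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Pipe the JSON lines (e.g. from `minikube start --output=json`) into stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}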

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.96s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-854000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (93.895833ms)
-- stdout --
	* [NoKubernetes-854000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-5980/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-5980/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-854000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-854000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.564209ms)
-- stdout --
	* The control-plane node NoKubernetes-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-854000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
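
The check above is purely exit-code driven: `systemctl is-active --quiet service kubelet` exits 0 only when the unit is active, and the exit status 83 seen here is minikube's own "host is not running" result, which still satisfies the assertion. A minimal sketch of the same check outside the test harness, using the binary and profile name from this log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-854000",
			"sudo systemctl is-active --quiet service kubelet")
		// Any non-zero exit (inactive unit, or minikube's status 83 for a stopped
		// host) means kubelet is not running, which is what --no-kubernetes promises.
		if err := cmd.Run(); err == nil {
			fmt.Println("FAIL: kubelet is active despite --no-kubernetes")
			os.Exit(1)
		}
		fmt.Println("ok: kubelet is not running")
	}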

TestNoKubernetes/serial/ProfileList (31.34s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.714527708s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.628529542s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.34s)

TestNoKubernetes/serial/Stop (2.03s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-854000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-854000: (2.033535792s)
--- PASS: TestNoKubernetes/serial/Stop (2.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-854000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-854000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.875167ms)
-- stdout --
	* The control-plane node NoKubernetes-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-854000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-572000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-572000 --alsologtostderr -v=3: (3.322835875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.32s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-109000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.64s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-572000 -n old-k8s-version-572000: exit status 7 (32.512083ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-572000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
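
The --format={{.Host}} argument is a Go text/template rendered against minikube's status structure, which is why a stopped profile prints the bare word "Stopped" while the detail is carried in exit status 7. A sketch of the templating mechanism only; the struct below is illustrative, not minikube's actual status type:

	package main

	import (
		"os"
		"text/template"
	)

	// hostStatus is an illustrative stand-in for the value the template renders.
	type hostStatus struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// For a stopped profile this prints exactly the "Stopped" seen above.
		_ = tmpl.Execute(os.Stdout, hostStatus{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"})
	}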

TestStartStop/group/no-preload/serial/Stop (3.4s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-626000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-626000 --alsologtostderr -v=3: (3.403404417s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-626000 -n no-preload-626000: exit status 7 (48.874875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-626000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.82s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-120000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-120000 --alsologtostderr -v=3: (3.821002666s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.82s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-120000 -n embed-certs-120000: exit status 7 (58.577167ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-120000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.69s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-109000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-109000 --alsologtostderr -v=3: (2.691937083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.69s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-109000 -n default-k8s-diff-port-109000: exit status 7 (56.548916ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-109000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-038000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.46s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-038000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-038000 --alsologtostderr -v=3: (3.464404542s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.46s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-038000 -n newest-cni-038000: exit status 7 (60.080625ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-038000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.82s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4044032049/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721398884352241000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4044032049/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721398884352241000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4044032049/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721398884352241000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4044032049/001/test-1721398884352241000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (56.027917ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.9255ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.402125ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.009125ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.603208ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.4575ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.939958ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo umount -f /mount-9p": exit status 83 (46.605042ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port4044032049/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.82s)
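
The repeated findmnt probes above are a poll-until-mounted loop: the harness keeps re-running the same probe and converts the eventual timeout into a SKIP, because macOS will not let the unsigned mount binary listen on a non-localhost port without an interactive prompt. A minimal sketch of that retry pattern, with a hypothetical deadline:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(10 * time.Second) // hypothetical timeout
		for time.Now().Before(deadline) {
			// The same probe as the log: succeeds only once the 9p mount is visible.
			err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-971000",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
			if err == nil {
				fmt.Println("mount appeared")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("skipping: mount did not appear")
	}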

TestFunctional/parallel/MountCmd/specific-port (10.29s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port356213687/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.327875ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.572542ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.221541ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.148333ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.679458ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.50625ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.164333ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "sudo umount -f /mount-9p": exit status 83 (46.395291ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-971000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port356213687/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.29s)

TestFunctional/parallel/MountCmd/VerifyCleanup (12.25s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3756300799/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3756300799/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3756300799/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1: exit status 83 (72.605208ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1: exit status 83 (83.249083ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1: exit status 83 (63.755375ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1: exit status 83 (85.316375ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1: exit status 83 (88.862958ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1: exit status 83 (88.21225ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-971000 ssh "findmnt -T" /mount1: exit status 83 (85.929667ms)
-- stdout --
	* The control-plane node functional-971000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-971000"
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3756300799/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3756300799/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-971000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3756300799/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.25s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-047000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-047000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-047000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /etc/hosts:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /etc/resolv.conf:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-047000

>>> host: crictl pods:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: crictl containers:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> k8s: describe netcat deployment:
error: context "cilium-047000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-047000" does not exist

>>> k8s: netcat logs:
error: context "cilium-047000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-047000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-047000" does not exist

>>> k8s: coredns logs:
error: context "cilium-047000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-047000" does not exist

>>> k8s: api server logs:
error: context "cilium-047000" does not exist

>>> host: /etc/cni:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: ip a s:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: ip r s:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: iptables-save:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: iptables table nat:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-047000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-047000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-047000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-047000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-047000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-047000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-047000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-047000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-047000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-047000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-047000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: kubelet daemon config:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> k8s: kubelet logs:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-047000

>>> host: docker daemon status:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: docker daemon config:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: docker system info:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: cri-docker daemon status:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: cri-docker daemon config:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: cri-dockerd version:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: containerd daemon status:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: containerd daemon config:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: containerd config dump:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: crio daemon status:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: crio daemon config:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: /etc/crio:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

>>> host: crio config:
* Profile "cilium-047000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-047000"

----------------------- debugLogs end: cilium-047000 [took: 2.171014s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-047000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-047000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-259000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
