Test Report: QEMU_macOS 19046

fb148a11d8032b35b0d9cd6893af3c5921ed4428:2024-06-10:34835

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.66
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.08
27 TestAddons/Setup 9.93
28 TestCertOptions 10.11
29 TestCertExpiration 195.43
30 TestDockerFlags 10.26
31 TestForceSystemdFlag 11.31
32 TestForceSystemdEnv 10.03
38 TestErrorSpam/setup 10.16
47 TestFunctional/serial/StartWithProxy 9.96
49 TestFunctional/serial/SoftStart 5.26
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
61 TestFunctional/serial/MinikubeKubectlCmd 0.63
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.96
63 TestFunctional/serial/ExtraConfig 5.25
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.2
73 TestFunctional/parallel/StatusCmd 0.12
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.12
82 TestFunctional/parallel/CpCmd 0.28
84 TestFunctional/parallel/FileSync 0.07
85 TestFunctional/parallel/CertSync 0.29
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.04
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
110 TestFunctional/parallel/ServiceCmd/Format 0.04
111 TestFunctional/parallel/ServiceCmd/URL 0.04
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 111.23
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.43
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 37.23
141 TestMultiControlPlane/serial/StartCluster 10.06
142 TestMultiControlPlane/serial/DeployApp 113.16
143 TestMultiControlPlane/serial/PingHostFromPods 0.09
144 TestMultiControlPlane/serial/AddWorkerNode 0.07
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
147 TestMultiControlPlane/serial/CopyFile 0.06
148 TestMultiControlPlane/serial/StopSecondaryNode 0.11
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
150 TestMultiControlPlane/serial/RestartSecondaryNode 50.48
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.41
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
155 TestMultiControlPlane/serial/StopCluster 1.95
156 TestMultiControlPlane/serial/RestartCluster 5.25
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.1
158 TestMultiControlPlane/serial/AddSecondaryNode 0.07
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
162 TestImageBuild/serial/Setup 10
165 TestJSONOutput/start/Command 9.79
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.04
194 TestMinikubeProfile 10.35
197 TestMountStart/serial/StartWithMountFirst 10.13
200 TestMultiNode/serial/FreshStart2Nodes 9.92
201 TestMultiNode/serial/DeployApp2Nodes 109.41
202 TestMultiNode/serial/PingHostFrom2Pods 0.09
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.1
206 TestMultiNode/serial/CopyFile 0.06
207 TestMultiNode/serial/StopNode 0.14
208 TestMultiNode/serial/StartAfterStop 49.22
209 TestMultiNode/serial/RestartKeepsNodes 8.54
210 TestMultiNode/serial/DeleteNode 0.11
211 TestMultiNode/serial/StopMultiNode 2
212 TestMultiNode/serial/RestartMultiNode 5.25
213 TestMultiNode/serial/ValidateNameConflict 19.96
217 TestPreload 10.09
219 TestScheduledStopUnix 10.04
220 TestSkaffold 13.47
223 TestRunningBinaryUpgrade 599.81
225 TestKubernetesUpgrade 18.29
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.01
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1
241 TestStoppedBinaryUpgrade/Upgrade 577.63
243 TestPause/serial/Start 9.98
253 TestNoKubernetes/serial/StartWithK8s 9.88
254 TestNoKubernetes/serial/StartWithStopK8s 5.44
255 TestNoKubernetes/serial/Start 5.42
259 TestNoKubernetes/serial/StartNoArgs 5.47
261 TestNetworkPlugins/group/auto/Start 10.01
262 TestNetworkPlugins/group/kindnet/Start 9.83
263 TestNetworkPlugins/group/calico/Start 10
264 TestNetworkPlugins/group/custom-flannel/Start 9.9
265 TestNetworkPlugins/group/false/Start 9.79
266 TestNetworkPlugins/group/enable-default-cni/Start 10.02
267 TestNetworkPlugins/group/flannel/Start 9.74
268 TestNetworkPlugins/group/bridge/Start 9.88
269 TestNetworkPlugins/group/kubenet/Start 9.76
272 TestStartStop/group/old-k8s-version/serial/FirstStart 10.14
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.21
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
281 TestStartStop/group/old-k8s-version/serial/Pause 0.1
283 TestStartStop/group/no-preload/serial/FirstStart 9.9
284 TestStartStop/group/no-preload/serial/DeployApp 0.09
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.27
290 TestStartStop/group/embed-certs/serial/FirstStart 10.33
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
294 TestStartStop/group/no-preload/serial/Pause 0.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.99
297 TestStartStop/group/embed-certs/serial/DeployApp 0.09
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/embed-certs/serial/SecondStart 6.3
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.86
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/embed-certs/serial/Pause 0.1
312 TestStartStop/group/newest-cni/serial/FirstStart 9.86
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
321 TestStartStop/group/newest-cni/serial/SecondStart 5.25
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (19.66s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-625000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-625000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (19.659723542s)

-- stdout --
	{"specversion":"1.0","id":"952177ab-762c-4c01-a60d-3b732eacf6ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-625000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"220e4ba3-7fc5-4c20-9122-0790cae2f75a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19046"}}
	{"specversion":"1.0","id":"99e37a02-3c84-4473-9991-8659ca38366c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig"}}
	{"specversion":"1.0","id":"15f2d2cb-8533-419c-a658-7061ec9a4b1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4ea7e92a-b1f2-4dfe-95c5-6b80683ca6aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"043efab7-ab12-4120-8ed8-fd471b5a56ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube"}}
	{"specversion":"1.0","id":"73c12b12-2fcb-45f0-bf03-a174194d2465","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"81fb2af7-e58a-4e1f-bc67-2fdd542cf116","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"161a5083-1437-4bde-9019-5456c0f1c25f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c54ff055-86db-4ddd-bd6f-578d4cf1fd6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b46306e4-cc29-406c-9bc5-4787b2854f23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-625000\" primary control-plane node in \"download-only-625000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9c1c973-8342-46f1-a728-7545f0e05fe9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9bea4fa-b940-4d67-a249-ad5ec2aec577","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900] Decompressors:map[bz2:0x1400000fd70 gz:0x1400000fd78 tar:0x1400000fcc0 tar.bz2:0x1400000fce0 tar.gz:0x1400000fd20 tar.xz:0x1400000fd30 tar.zst:0x1400000fd60 tbz2:0x1400000fce0 tgz:0x14
00000fd20 txz:0x1400000fd30 tzst:0x1400000fd60 xz:0x1400000fd80 zip:0x1400000fdb0 zst:0x1400000fd88] Getters:map[file:0x140016c45c0 http:0x140005a2230 https:0x140005a2280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"d7313b04-92f9-4a37-87f8-454d42c17ccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0610 03:19:39.444787    5689 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:19:39.444950    5689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:19:39.444954    5689 out.go:304] Setting ErrFile to fd 2...
	I0610 03:19:39.444956    5689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:19:39.445105    5689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	W0610 03:19:39.445195    5689 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19046-4812/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19046-4812/.minikube/config/config.json: no such file or directory
	I0610 03:19:39.446446    5689 out.go:298] Setting JSON to true
	I0610 03:19:39.464250    5689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4750,"bootTime":1718010029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:19:39.464337    5689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:19:39.482816    5689 out.go:97] [download-only-625000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:19:39.485738    5689 out.go:169] MINIKUBE_LOCATION=19046
	I0610 03:19:39.482936    5689 notify.go:220] Checking for updates...
	W0610 03:19:39.482955    5689 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 03:19:39.517879    5689 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:19:39.521723    5689 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:19:39.524794    5689 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:19:39.529713    5689 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	W0610 03:19:39.535721    5689 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 03:19:39.535978    5689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:19:39.537402    5689 out.go:97] Using the qemu2 driver based on user configuration
	I0610 03:19:39.537427    5689 start.go:297] selected driver: qemu2
	I0610 03:19:39.537441    5689 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:19:39.537534    5689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:19:39.541728    5689 out.go:169] Automatically selected the socket_vmnet network
	I0610 03:19:39.548315    5689 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 03:19:39.548423    5689 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 03:19:39.548486    5689 cni.go:84] Creating CNI manager for ""
	I0610 03:19:39.548503    5689 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 03:19:39.548556    5689 start.go:340] cluster config:
	{Name:download-only-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:19:39.553977    5689 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:19:39.558746    5689 out.go:97] Downloading VM boot image ...
	I0610 03:19:39.558779    5689 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso
	I0610 03:19:47.502943    5689 out.go:97] Starting "download-only-625000" primary control-plane node in "download-only-625000" cluster
	I0610 03:19:47.502967    5689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:19:47.624198    5689 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 03:19:47.624231    5689 cache.go:56] Caching tarball of preloaded images
	I0610 03:19:47.624464    5689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:19:47.628756    5689 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0610 03:19:47.628769    5689 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:19:47.851975    5689 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 03:19:57.979424    5689 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:19:57.979586    5689 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:19:58.674040    5689 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 03:19:58.674261    5689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/download-only-625000/config.json ...
	I0610 03:19:58.674279    5689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/download-only-625000/config.json: {Name:mk679a6fd591b26c67bafaaf1438ab6da55259f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:19:58.675322    5689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:19:58.675518    5689 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0610 03:19:59.026578    5689 out.go:169] 
	W0610 03:19:59.031486    5689 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900] Decompressors:map[bz2:0x1400000fd70 gz:0x1400000fd78 tar:0x1400000fcc0 tar.bz2:0x1400000fce0 tar.gz:0x1400000fd20 tar.xz:0x1400000fd30 tar.zst:0x1400000fd60 tbz2:0x1400000fce0 tgz:0x1400000fd20 txz:0x1400000fd30 tzst:0x1400000fd60 xz:0x1400000fd80 zip:0x1400000fdb0 zst:0x1400000fd88] Getters:map[file:0x140016c45c0 http:0x140005a2230 https:0x140005a2280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 03:19:59.031505    5689 out_reason.go:110] 
	W0610 03:19:59.039508    5689 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:19:59.042490    5689 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-625000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (19.66s)
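
Note: the exit status 40 above traces to the kubectl cache step. The checksum URL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 answers 404, most likely because Kubernetes v1.20.0 predates darwin/arm64 kubectl builds. A minimal probe (URL copied verbatim from the getter error; a diagnostic sketch, not part of the test suite) reproduces the response:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL from the INET_CACHE_KUBECTL error above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	// A 404 status matches the "bad response code: 404" in the log.
	fmt.Println(url, "->", resp.Status)
}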

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
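
Note: this failure is downstream of the json-events failure above: the download never completed, so the cached binary is absent. A self-contained sketch of the same stat check (path copied from the failure message; the MINIKUBE_HOME prefix is specific to this CI host):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path from the failure message above; differs per MINIKUBE_HOME.
	p := "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(p); err != nil {
		fmt.Println("cached kubectl missing:", err) // what the test asserts on
		return
	}
	fmt.Println("cached kubectl present")
}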

TestOffline (10.08s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-139000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-139000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.941890542s)

-- stdout --
	* [offline-docker-139000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-139000" primary control-plane node in "offline-docker-139000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-139000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:32:01.819127    7198 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:32:01.819266    7198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:01.819269    7198 out.go:304] Setting ErrFile to fd 2...
	I0610 03:32:01.819272    7198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:01.819403    7198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:32:01.820711    7198 out.go:298] Setting JSON to false
	I0610 03:32:01.838618    7198 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5492,"bootTime":1718010029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:32:01.838722    7198 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:32:01.841370    7198 out.go:177] * [offline-docker-139000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:32:01.849229    7198 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:32:01.852117    7198 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:32:01.849233    7198 notify.go:220] Checking for updates...
	I0610 03:32:01.858166    7198 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:32:01.861057    7198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:32:01.864206    7198 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:32:01.867175    7198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:32:01.868755    7198 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:32:01.868819    7198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:32:01.873132    7198 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:32:01.880032    7198 start.go:297] selected driver: qemu2
	I0610 03:32:01.880042    7198 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:32:01.880051    7198 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:32:01.882040    7198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:32:01.885102    7198 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:32:01.888240    7198 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:32:01.888272    7198 cni.go:84] Creating CNI manager for ""
	I0610 03:32:01.888277    7198 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:32:01.888281    7198 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:32:01.888332    7198 start.go:340] cluster config:
	{Name:offline-docker-139000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:32:01.893148    7198 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:32:01.900116    7198 out.go:177] * Starting "offline-docker-139000" primary control-plane node in "offline-docker-139000" cluster
	I0610 03:32:01.904106    7198 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:32:01.904138    7198 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:32:01.904147    7198 cache.go:56] Caching tarball of preloaded images
	I0610 03:32:01.904219    7198 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:32:01.904224    7198 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:32:01.904304    7198 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/offline-docker-139000/config.json ...
	I0610 03:32:01.904315    7198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/offline-docker-139000/config.json: {Name:mkc5a874c1cca87ea2de732c3855773fe13dafc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:32:01.904622    7198 start.go:360] acquireMachinesLock for offline-docker-139000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:01.904655    7198 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "offline-docker-139000"
	I0610 03:32:01.904668    7198 start.go:93] Provisioning new machine with config: &{Name:offline-docker-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:01.904694    7198 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:01.909130    7198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:01.924899    7198 start.go:159] libmachine.API.Create for "offline-docker-139000" (driver="qemu2")
	I0610 03:32:01.924948    7198 client.go:168] LocalClient.Create starting
	I0610 03:32:01.925028    7198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:01.925064    7198 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:01.925075    7198 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:01.925126    7198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:01.925150    7198 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:01.925156    7198 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:01.925533    7198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:02.070050    7198 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:02.229274    7198 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:02.229284    7198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:02.229501    7198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2
	I0610 03:32:02.242973    7198 main.go:141] libmachine: STDOUT: 
	I0610 03:32:02.243042    7198 main.go:141] libmachine: STDERR: 
	I0610 03:32:02.243112    7198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2 +20000M
	I0610 03:32:02.255783    7198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:02.255805    7198 main.go:141] libmachine: STDERR: 
	I0610 03:32:02.255826    7198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2
	I0610 03:32:02.255830    7198 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:02.255862    7198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:fe:57:8f:38:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2
	I0610 03:32:02.257580    7198 main.go:141] libmachine: STDOUT: 
	I0610 03:32:02.257595    7198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:02.257613    7198 client.go:171] duration metric: took 332.665166ms to LocalClient.Create
	I0610 03:32:04.259775    7198 start.go:128] duration metric: took 2.355097958s to createHost
	I0610 03:32:04.259798    7198 start.go:83] releasing machines lock for "offline-docker-139000", held for 2.355175791s
	W0610 03:32:04.259820    7198 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:04.266044    7198 out.go:177] * Deleting "offline-docker-139000" in qemu2 ...
	W0610 03:32:04.276095    7198 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:04.276108    7198 start.go:728] Will try again in 5 seconds ...
	I0610 03:32:09.278273    7198 start.go:360] acquireMachinesLock for offline-docker-139000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:09.278710    7198 start.go:364] duration metric: took 340.25µs to acquireMachinesLock for "offline-docker-139000"
	I0610 03:32:09.278857    7198 start.go:93] Provisioning new machine with config: &{Name:offline-docker-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:offline-docker-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:09.279211    7198 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:09.289773    7198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:09.340047    7198 start.go:159] libmachine.API.Create for "offline-docker-139000" (driver="qemu2")
	I0610 03:32:09.340099    7198 client.go:168] LocalClient.Create starting
	I0610 03:32:09.340212    7198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:09.340273    7198 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:09.340286    7198 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:09.340361    7198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:09.340405    7198 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:09.340416    7198 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:09.341126    7198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:09.495486    7198 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:09.671299    7198 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:09.671307    7198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:09.671522    7198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2
	I0610 03:32:09.684500    7198 main.go:141] libmachine: STDOUT: 
	I0610 03:32:09.684527    7198 main.go:141] libmachine: STDERR: 
	I0610 03:32:09.684585    7198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2 +20000M
	I0610 03:32:09.695539    7198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:09.695553    7198 main.go:141] libmachine: STDERR: 
	I0610 03:32:09.695569    7198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2
	I0610 03:32:09.695574    7198 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:09.695604    7198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ba:9c:9d:d2:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/offline-docker-139000/disk.qcow2
	I0610 03:32:09.697269    7198 main.go:141] libmachine: STDOUT: 
	I0610 03:32:09.697286    7198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:09.697303    7198 client.go:171] duration metric: took 357.203208ms to LocalClient.Create
	I0610 03:32:11.699377    7198 start.go:128] duration metric: took 2.420176458s to createHost
	I0610 03:32:11.699403    7198 start.go:83] releasing machines lock for "offline-docker-139000", held for 2.420714042s
	W0610 03:32:11.699518    7198 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:11.707763    7198 out.go:177] 
	W0610 03:32:11.711842    7198 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:32:11.711853    7198 out.go:239] * 
	* 
	W0610 03:32:11.712305    7198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:32:11.719787    7198 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-139000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-06-10 03:32:11.731875 -0700 PDT m=+752.368864834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-139000 -n offline-docker-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-139000 -n offline-docker-139000: exit status 7 (33.200541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-139000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-139000
--- FAIL: TestOffline (10.08s)
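
Note: this is the failure mode shared by most of the exit-status-80 starts in this run: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal sketch (socket path copied from the logs) that reproduces the refusal when the daemon is not running:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Unix socket used by the qemu-system-aarch64 invocations in the stderr log.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With the daemon down this prints "connect: connection refused",
		// matching the ERROR lines captured in stdout above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}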

TestAddons/Setup (9.93s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-028000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-028000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (9.932050875s)

-- stdout --
	* [addons-028000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-028000" primary control-plane node in "addons-028000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-028000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:20:14.250543    5798 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:20:14.250682    5798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:20:14.250686    5798 out.go:304] Setting ErrFile to fd 2...
	I0610 03:20:14.250688    5798 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:20:14.250822    5798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:20:14.251945    5798 out.go:298] Setting JSON to false
	I0610 03:20:14.268301    5798 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4785,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:20:14.268366    5798 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:20:14.273692    5798 out.go:177] * [addons-028000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:20:14.280785    5798 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:20:14.284603    5798 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:20:14.280838    5798 notify.go:220] Checking for updates...
	I0610 03:20:14.287636    5798 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:20:14.290695    5798 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:20:14.293688    5798 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:20:14.296702    5798 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:20:14.299789    5798 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:20:14.303673    5798 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:20:14.310806    5798 start.go:297] selected driver: qemu2
	I0610 03:20:14.310810    5798 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:20:14.310815    5798 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:20:14.313005    5798 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:20:14.316638    5798 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:20:14.319758    5798 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:20:14.319806    5798 cni.go:84] Creating CNI manager for ""
	I0610 03:20:14.319816    5798 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:20:14.319820    5798 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:20:14.319851    5798 start.go:340] cluster config:
	{Name:addons-028000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:20:14.324433    5798 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:20:14.332586    5798 out.go:177] * Starting "addons-028000" primary control-plane node in "addons-028000" cluster
	I0610 03:20:14.336670    5798 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:20:14.336684    5798 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:20:14.336693    5798 cache.go:56] Caching tarball of preloaded images
	I0610 03:20:14.336756    5798 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:20:14.336762    5798 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:20:14.337011    5798 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/addons-028000/config.json ...
	I0610 03:20:14.337023    5798 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/addons-028000/config.json: {Name:mk5f25e142c60e46e3e02273b2b81cb19055c41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:20:14.337427    5798 start.go:360] acquireMachinesLock for addons-028000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:20:14.337643    5798 start.go:364] duration metric: took 207.959µs to acquireMachinesLock for "addons-028000"
	I0610 03:20:14.337657    5798 start.go:93] Provisioning new machine with config: &{Name:addons-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:20:14.337693    5798 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:20:14.346623    5798 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 03:20:14.365170    5798 start.go:159] libmachine.API.Create for "addons-028000" (driver="qemu2")
	I0610 03:20:14.365211    5798 client.go:168] LocalClient.Create starting
	I0610 03:20:14.365345    5798 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:20:14.425477    5798 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:20:14.466003    5798 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:20:14.647836    5798 main.go:141] libmachine: Creating SSH key...
	I0610 03:20:14.708939    5798 main.go:141] libmachine: Creating Disk image...
	I0610 03:20:14.708944    5798 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:20:14.709117    5798 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2
	I0610 03:20:14.721998    5798 main.go:141] libmachine: STDOUT: 
	I0610 03:20:14.722017    5798 main.go:141] libmachine: STDERR: 
	I0610 03:20:14.722085    5798 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2 +20000M
	I0610 03:20:14.733187    5798 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:20:14.733205    5798 main.go:141] libmachine: STDERR: 
	I0610 03:20:14.733221    5798 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2
	I0610 03:20:14.733230    5798 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:20:14.733260    5798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:47:cf:62:86:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2
	I0610 03:20:14.734921    5798 main.go:141] libmachine: STDOUT: 
	I0610 03:20:14.734936    5798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:20:14.734956    5798 client.go:171] duration metric: took 369.745291ms to LocalClient.Create
	I0610 03:20:16.737112    5798 start.go:128] duration metric: took 2.39943875s to createHost
	I0610 03:20:16.737181    5798 start.go:83] releasing machines lock for "addons-028000", held for 2.399564833s
	W0610 03:20:16.737241    5798 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:20:16.752567    5798 out.go:177] * Deleting "addons-028000" in qemu2 ...
	W0610 03:20:16.779536    5798 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:20:16.779684    5798 start.go:728] Will try again in 5 seconds ...
	I0610 03:20:21.781862    5798 start.go:360] acquireMachinesLock for addons-028000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:20:21.782283    5798 start.go:364] duration metric: took 288.917µs to acquireMachinesLock for "addons-028000"
	I0610 03:20:21.782406    5798 start.go:93] Provisioning new machine with config: &{Name:addons-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:20:21.782729    5798 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:20:21.791512    5798 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 03:20:21.841577    5798 start.go:159] libmachine.API.Create for "addons-028000" (driver="qemu2")
	I0610 03:20:21.841635    5798 client.go:168] LocalClient.Create starting
	I0610 03:20:21.841752    5798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:20:21.841814    5798 main.go:141] libmachine: Decoding PEM data...
	I0610 03:20:21.841827    5798 main.go:141] libmachine: Parsing certificate...
	I0610 03:20:21.841947    5798 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:20:21.842005    5798 main.go:141] libmachine: Decoding PEM data...
	I0610 03:20:21.842015    5798 main.go:141] libmachine: Parsing certificate...
	I0610 03:20:21.842701    5798 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:20:22.019372    5798 main.go:141] libmachine: Creating SSH key...
	I0610 03:20:22.081976    5798 main.go:141] libmachine: Creating Disk image...
	I0610 03:20:22.081985    5798 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:20:22.082161    5798 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2
	I0610 03:20:22.094919    5798 main.go:141] libmachine: STDOUT: 
	I0610 03:20:22.094940    5798 main.go:141] libmachine: STDERR: 
	I0610 03:20:22.095003    5798 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2 +20000M
	I0610 03:20:22.106024    5798 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:20:22.106040    5798 main.go:141] libmachine: STDERR: 
	I0610 03:20:22.106053    5798 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2
	I0610 03:20:22.106059    5798 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:20:22.106089    5798 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:f7:99:00:86:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/addons-028000/disk.qcow2
	I0610 03:20:22.107732    5798 main.go:141] libmachine: STDOUT: 
	I0610 03:20:22.107748    5798 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:20:22.107760    5798 client.go:171] duration metric: took 266.120916ms to LocalClient.Create
	I0610 03:20:24.110004    5798 start.go:128] duration metric: took 2.327240292s to createHost
	I0610 03:20:24.110090    5798 start.go:83] releasing machines lock for "addons-028000", held for 2.327820417s
	W0610 03:20:24.110513    5798 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-028000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-028000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:20:24.119242    5798 out.go:177] 
	W0610 03:20:24.126426    5798 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:20:24.126457    5798 out.go:239] * 
	* 
	W0610 03:20:24.129285    5798 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:20:24.139284    5798 out.go:177] 

** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-028000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (9.93s)
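
The trace above shows minikube's retry path end to end: createHost fails, the half-created profile is deleted, start.go waits five seconds, and a single retry runs before the command exits with GUEST_PROVISION (exit status 80). A simplified sketch of that control flow; the function name here is illustrative, not minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine launching QEMU via socket_vmnet_client.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := createHost()
	if err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := createHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		os.Exit(80) // the exit status 80 the test observed
	}
}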

TestCertOptions (10.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-960000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-960000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.809928167s)

-- stdout --
	* [cert-options-960000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-960000" primary control-plane node in "cert-options-960000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-960000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-960000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-960000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-960000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-960000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.259458ms)

-- stdout --
	* The control-plane node cert-options-960000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-960000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-960000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-960000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-960000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-960000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.49825ms)

-- stdout --
	* The control-plane node cert-options-960000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-960000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-960000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-960000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-960000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-06-10 03:32:42.120248 -0700 PDT m=+782.757724001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-960000 -n cert-options-960000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-960000 -n cert-options-960000: exit status 7 (30.469375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-960000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-960000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-960000
--- FAIL: TestCertOptions (10.11s)
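
TestCertOptions never reached a running VM, so the SAN assertions above fail vacuously. For reference, the check the test performs with openssl can be expressed with crypto/x509 instead; a minimal sketch (the certificate path comes from the test's ssh command, and reading it would normally require being inside the guest):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the test's "openssl x509 ... -in" invocation.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in apiserver.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The test expects localhost and www.google.com among the DNS SANs
	// and 127.0.0.1 and 192.168.15.15 among the IP SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}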

TestCertExpiration (195.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-032000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-032000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.030499042s)

-- stdout --
	* [cert-expiration-032000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-032000" primary control-plane node in "cert-expiration-032000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-032000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-032000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-032000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-032000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-032000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.2243245s)

-- stdout --
	* [cert-expiration-032000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-032000" primary control-plane node in "cert-expiration-032000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-032000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-032000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-032000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-032000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-032000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-032000" primary control-plane node in "cert-expiration-032000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-032000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-032000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-032000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-06-10 03:35:42.17362 -0700 PDT m=+962.813971209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-032000 -n cert-expiration-032000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-032000 -n cert-expiration-032000: exit status 7 (59.555584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-032000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-032000
--- FAIL: TestCertExpiration (195.43s)
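
The 195s duration is mostly deliberate waiting: the test starts with --cert-expiration=3m, waits out that window (the timestamps above span roughly three minutes), then restarts with --cert-expiration=8760h and expects the second start to warn about expired certs. Here both starts failed before any certificate was issued, so no warning could appear. The expiry condition itself is a NotAfter comparison; a minimal sketch (illustrative, not the test's code; the cert path is a command-line argument):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: certexpiry <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1]) // e.g. a minikube-issued apiserver cert
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Printf("certificate expired at %s; a restart should warn and regenerate it\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
	}
}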

TestDockerFlags (10.26s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-401000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-401000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.004349208s)

-- stdout --
	* [docker-flags-401000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-401000" primary control-plane node in "docker-flags-401000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-401000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:32:21.923774    7398 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:32:21.923903    7398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:21.923906    7398 out.go:304] Setting ErrFile to fd 2...
	I0610 03:32:21.923908    7398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:21.924034    7398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:32:21.925120    7398 out.go:298] Setting JSON to false
	I0610 03:32:21.941358    7398 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5512,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:32:21.941420    7398 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:32:21.947408    7398 out.go:177] * [docker-flags-401000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:32:21.955346    7398 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:32:21.955402    7398 notify.go:220] Checking for updates...
	I0610 03:32:21.960393    7398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:32:21.963286    7398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:32:21.966373    7398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:32:21.969407    7398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:32:21.972327    7398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:32:21.975671    7398 config.go:182] Loaded profile config "force-systemd-flag-140000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:32:21.975740    7398 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:32:21.975786    7398 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:32:21.980390    7398 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:32:21.987299    7398 start.go:297] selected driver: qemu2
	I0610 03:32:21.987304    7398 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:32:21.987309    7398 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:32:21.989593    7398 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:32:21.993337    7398 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:32:21.996339    7398 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0610 03:32:21.996392    7398 cni.go:84] Creating CNI manager for ""
	I0610 03:32:21.996400    7398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:32:21.996404    7398 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:32:21.996452    7398 start.go:340] cluster config:
	{Name:docker-flags-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:32:22.000880    7398 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:32:22.008333    7398 out.go:177] * Starting "docker-flags-401000" primary control-plane node in "docker-flags-401000" cluster
	I0610 03:32:22.012333    7398 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:32:22.012348    7398 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:32:22.012358    7398 cache.go:56] Caching tarball of preloaded images
	I0610 03:32:22.012421    7398 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:32:22.012430    7398 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:32:22.012504    7398 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/docker-flags-401000/config.json ...
	I0610 03:32:22.012520    7398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/docker-flags-401000/config.json: {Name:mkbb26bac29b5e4ba6ab25bf295e3d3ed8e4d9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:32:22.012732    7398 start.go:360] acquireMachinesLock for docker-flags-401000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:22.012768    7398 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "docker-flags-401000"
	I0610 03:32:22.012779    7398 start.go:93] Provisioning new machine with config: &{Name:docker-flags-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:22.012816    7398 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:22.021360    7398 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:22.038924    7398 start.go:159] libmachine.API.Create for "docker-flags-401000" (driver="qemu2")
	I0610 03:32:22.038958    7398 client.go:168] LocalClient.Create starting
	I0610 03:32:22.039017    7398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:22.039044    7398 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:22.039057    7398 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:22.039101    7398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:22.039123    7398 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:22.039129    7398 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:22.039540    7398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:22.182777    7398 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:22.331498    7398 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:22.331503    7398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:22.331702    7398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 03:32:22.344471    7398 main.go:141] libmachine: STDOUT: 
	I0610 03:32:22.344491    7398 main.go:141] libmachine: STDERR: 
	I0610 03:32:22.344552    7398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2 +20000M
	I0610 03:32:22.355352    7398 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:22.355367    7398 main.go:141] libmachine: STDERR: 
	I0610 03:32:22.355389    7398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 03:32:22.355397    7398 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:22.355427    7398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ec:46:be:0f:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 03:32:22.357142    7398 main.go:141] libmachine: STDOUT: 
	I0610 03:32:22.357156    7398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:22.357177    7398 client.go:171] duration metric: took 318.218375ms to LocalClient.Create
	I0610 03:32:24.359361    7398 start.go:128] duration metric: took 2.34656325s to createHost
	I0610 03:32:24.359415    7398 start.go:83] releasing machines lock for "docker-flags-401000", held for 2.346674083s
	W0610 03:32:24.359487    7398 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:24.380431    7398 out.go:177] * Deleting "docker-flags-401000" in qemu2 ...
	W0610 03:32:24.400229    7398 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:24.400253    7398 start.go:728] Will try again in 5 seconds ...
	I0610 03:32:29.402348    7398 start.go:360] acquireMachinesLock for docker-flags-401000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:29.499468    7398 start.go:364] duration metric: took 96.972125ms to acquireMachinesLock for "docker-flags-401000"
	I0610 03:32:29.499583    7398 start.go:93] Provisioning new machine with config: &{Name:docker-flags-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:docker-flags-401000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:29.499805    7398 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:29.505308    7398 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:29.556390    7398 start.go:159] libmachine.API.Create for "docker-flags-401000" (driver="qemu2")
	I0610 03:32:29.556440    7398 client.go:168] LocalClient.Create starting
	I0610 03:32:29.556552    7398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:29.556616    7398 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:29.556633    7398 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:29.556694    7398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:29.556738    7398 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:29.556751    7398 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:29.557314    7398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:29.715259    7398 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:29.820016    7398 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:29.820022    7398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:29.820197    7398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 03:32:29.832919    7398 main.go:141] libmachine: STDOUT: 
	I0610 03:32:29.832942    7398 main.go:141] libmachine: STDERR: 
	I0610 03:32:29.833002    7398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2 +20000M
	I0610 03:32:29.843862    7398 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:29.843879    7398 main.go:141] libmachine: STDERR: 
	I0610 03:32:29.843896    7398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 03:32:29.843901    7398 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:29.843935    7398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:10:03:44:22:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 03:32:29.845600    7398 main.go:141] libmachine: STDOUT: 
	I0610 03:32:29.845615    7398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:29.845629    7398 client.go:171] duration metric: took 289.188625ms to LocalClient.Create
	I0610 03:32:31.847773    7398 start.go:128] duration metric: took 2.347975959s to createHost
	I0610 03:32:31.847837    7398 start.go:83] releasing machines lock for "docker-flags-401000", held for 2.348375083s
	W0610 03:32:31.848258    7398 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:31.860931    7398 out.go:177] 
	W0610 03:32:31.871246    7398 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:32:31.871331    7398 out.go:239] * 
	* 
	W0610 03:32:31.873961    7398 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:32:31.885883    7398 out.go:177] 

** /stderr **
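
Note on the stderr capture above: the disk image is built with the usual two-step qemu-img pattern, converting a small raw seed image to qcow2 and then growing it by the requested 20000 MB. Because qcow2 allocates host space lazily, the resize finishes in roughly 10 ms per the log timestamps. A standalone sketch of the same steps, with paths shortened here purely for illustration:

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M
    qemu-img info disk.qcow2   # virtual size grows; host blocks are allocated on demand
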
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-401000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.198375ms)

-- stdout --
	* The control-plane node docker-flags-401000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-401000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-401000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-401000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-401000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-401000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-401000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.530541ms)

-- stdout --
	* The control-plane node docker-flags-401000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-401000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-401000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-401000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-401000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-401000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-06-10 03:32:32.027865 -0700 PDT m=+772.665179709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-401000 -n docker-flags-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-401000 -n docker-flags-401000: exit status 7 (28.605208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-401000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-401000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-401000
--- FAIL: TestDockerFlags (10.26s)
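
The qemu2 failures in this group share one proximate cause, visible in the stderr capture above: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 never launches and the profile is left Stopped. A minimal triage sketch for the CI host, assuming socket_vmnet is installed under /opt/socket_vmnet as the paths in the log indicate (the gateway address below is the upstream default, an assumption rather than a value from this report):

    # Is the unix socket present and the daemon process alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If not, start the daemon as root with the assumed default gateway:
    sudo /opt/socket_vmnet/bin/socket_vmnet \
        --vmnet-gateway=192.168.105.1 \
        /var/run/socket_vmnet

With the daemon reachable, the "Starting QEMU VM..." step should proceed instead of failing at connect time.
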

TestForceSystemdFlag (11.31s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-140000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-140000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.100906541s)

-- stdout --
	* [force-systemd-flag-140000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-140000" primary control-plane node in "force-systemd-flag-140000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-140000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:32:15.631051    7375 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:32:15.631197    7375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:15.631200    7375 out.go:304] Setting ErrFile to fd 2...
	I0610 03:32:15.631202    7375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:15.631326    7375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:32:15.632340    7375 out.go:298] Setting JSON to false
	I0610 03:32:15.648157    7375 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5506,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:32:15.648217    7375 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:32:15.653471    7375 out.go:177] * [force-systemd-flag-140000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:32:15.660308    7375 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:32:15.665354    7375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:32:15.660367    7375 notify.go:220] Checking for updates...
	I0610 03:32:15.672387    7375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:32:15.676296    7375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:32:15.679322    7375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:32:15.682376    7375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:32:15.685815    7375 config.go:182] Loaded profile config "force-systemd-env-550000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:32:15.685896    7375 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:32:15.685948    7375 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:32:15.690340    7375 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:32:15.697286    7375 start.go:297] selected driver: qemu2
	I0610 03:32:15.697292    7375 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:32:15.697298    7375 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:32:15.699328    7375 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:32:15.702333    7375 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:32:15.705391    7375 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 03:32:15.705418    7375 cni.go:84] Creating CNI manager for ""
	I0610 03:32:15.705423    7375 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:32:15.705426    7375 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:32:15.705458    7375 start.go:340] cluster config:
	{Name:force-systemd-flag-140000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:32:15.709508    7375 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:32:15.716360    7375 out.go:177] * Starting "force-systemd-flag-140000" primary control-plane node in "force-systemd-flag-140000" cluster
	I0610 03:32:15.719325    7375 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:32:15.719339    7375 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:32:15.719346    7375 cache.go:56] Caching tarball of preloaded images
	I0610 03:32:15.719397    7375 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:32:15.719402    7375 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:32:15.719469    7375 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/force-systemd-flag-140000/config.json ...
	I0610 03:32:15.719478    7375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/force-systemd-flag-140000/config.json: {Name:mkc23f4679ffaeaa0b3eb31cc6c5df0951b98254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:32:15.719678    7375 start.go:360] acquireMachinesLock for force-systemd-flag-140000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:15.719712    7375 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "force-systemd-flag-140000"
	I0610 03:32:15.719723    7375 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:15.719744    7375 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:15.728185    7375 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:15.743337    7375 start.go:159] libmachine.API.Create for "force-systemd-flag-140000" (driver="qemu2")
	I0610 03:32:15.743371    7375 client.go:168] LocalClient.Create starting
	I0610 03:32:15.743434    7375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:15.743470    7375 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:15.743480    7375 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:15.743521    7375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:15.743547    7375 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:15.743556    7375 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:15.744014    7375 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:15.886459    7375 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:15.964306    7375 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:15.964314    7375 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:15.964496    7375 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2
	I0610 03:32:15.977274    7375 main.go:141] libmachine: STDOUT: 
	I0610 03:32:15.977294    7375 main.go:141] libmachine: STDERR: 
	I0610 03:32:15.977353    7375 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2 +20000M
	I0610 03:32:15.988234    7375 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:15.988256    7375 main.go:141] libmachine: STDERR: 
	I0610 03:32:15.988275    7375 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2
	I0610 03:32:15.988290    7375 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:15.988323    7375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:fa:a3:b3:3d:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2
	I0610 03:32:15.990034    7375 main.go:141] libmachine: STDOUT: 
	I0610 03:32:15.990049    7375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:15.990073    7375 client.go:171] duration metric: took 246.698709ms to LocalClient.Create
	I0610 03:32:17.992228    7375 start.go:128] duration metric: took 2.272497833s to createHost
	I0610 03:32:17.992285    7375 start.go:83] releasing machines lock for "force-systemd-flag-140000", held for 2.272599041s
	W0610 03:32:17.992392    7375 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:18.003680    7375 out.go:177] * Deleting "force-systemd-flag-140000" in qemu2 ...
	W0610 03:32:18.035401    7375 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:18.035430    7375 start.go:728] Will try again in 5 seconds ...
	I0610 03:32:23.037606    7375 start.go:360] acquireMachinesLock for force-systemd-flag-140000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:24.359640    7375 start.go:364] duration metric: took 1.32189775s to acquireMachinesLock for "force-systemd-flag-140000"
	I0610 03:32:24.359731    7375 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-140000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-flag-140000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:24.359979    7375 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:24.369417    7375 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:24.415970    7375 start.go:159] libmachine.API.Create for "force-systemd-flag-140000" (driver="qemu2")
	I0610 03:32:24.416010    7375 client.go:168] LocalClient.Create starting
	I0610 03:32:24.416144    7375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:24.416221    7375 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:24.416239    7375 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:24.416305    7375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:24.416349    7375 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:24.416362    7375 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:24.416960    7375 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:24.573981    7375 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:24.622153    7375 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:24.622159    7375 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:24.622346    7375 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2
	I0610 03:32:24.634991    7375 main.go:141] libmachine: STDOUT: 
	I0610 03:32:24.635015    7375 main.go:141] libmachine: STDERR: 
	I0610 03:32:24.635074    7375 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2 +20000M
	I0610 03:32:24.645861    7375 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:24.645882    7375 main.go:141] libmachine: STDERR: 
	I0610 03:32:24.645903    7375 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2
	I0610 03:32:24.645908    7375 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:24.645939    7375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:3a:88:a0:ac:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-flag-140000/disk.qcow2
	I0610 03:32:24.647650    7375 main.go:141] libmachine: STDOUT: 
	I0610 03:32:24.647667    7375 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:24.647682    7375 client.go:171] duration metric: took 231.669334ms to LocalClient.Create
	I0610 03:32:26.649986    7375 start.go:128] duration metric: took 2.290000792s to createHost
	I0610 03:32:26.650066    7375 start.go:83] releasing machines lock for "force-systemd-flag-140000", held for 2.290423833s
	W0610 03:32:26.650554    7375 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-140000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:26.666262    7375 out.go:177] 
	W0610 03:32:26.675217    7375 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:32:26.675254    7375 out.go:239] * 
	* 
	W0610 03:32:26.677943    7375 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:32:26.689039    7375 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-140000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-140000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-140000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.078125ms)

-- stdout --
	* The control-plane node force-systemd-flag-140000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-140000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-140000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-06-10 03:32:26.783187 -0700 PDT m=+767.420418042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-140000 -n force-systemd-flag-140000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-140000 -n force-systemd-flag-140000: exit status 7 (33.708583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-140000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-140000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-140000
--- FAIL: TestForceSystemdFlag (11.31s)
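
The assertion at docker_test.go:110 above is the heart of this test: a cluster started with --force-systemd should report the systemd cgroup driver. It failed here only because no host ever came up (exit status 83, state=Stopped), not because the driver check itself misbehaved. Against a healthy profile the same check would look like this, reusing the profile name from the log purely for illustration:

    out/minikube-darwin-arm64 -p force-systemd-flag-140000 ssh \
        "docker info --format {{.CgroupDriver}}"
    # expected output on a working --force-systemd cluster: systemd
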

TestForceSystemdEnv (10.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-550000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-550000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.812911667s)

-- stdout --
	* [force-systemd-env-550000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-550000" primary control-plane node in "force-systemd-env-550000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-550000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:32:11.896976    7353 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:32:11.897341    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:11.897346    7353 out.go:304] Setting ErrFile to fd 2...
	I0610 03:32:11.897348    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:32:11.897543    7353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:32:11.899077    7353 out.go:298] Setting JSON to false
	I0610 03:32:11.916009    7353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5502,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:32:11.916081    7353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:32:11.921730    7353 out.go:177] * [force-systemd-env-550000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:32:11.928813    7353 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:32:11.928855    7353 notify.go:220] Checking for updates...
	I0610 03:32:11.933797    7353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:32:11.940634    7353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:32:11.944765    7353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:32:11.947751    7353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:32:11.950673    7353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0610 03:32:11.954167    7353 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:32:11.954224    7353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:32:11.958781    7353 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:32:11.965786    7353 start.go:297] selected driver: qemu2
	I0610 03:32:11.965791    7353 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:32:11.965797    7353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:32:11.967882    7353 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:32:11.970788    7353 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:32:11.972216    7353 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 03:32:11.972240    7353 cni.go:84] Creating CNI manager for ""
	I0610 03:32:11.972245    7353 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:32:11.972252    7353 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:32:11.972276    7353 start.go:340] cluster config:
	{Name:force-systemd-env-550000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-550000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:32:11.976265    7353 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:32:11.983811    7353 out.go:177] * Starting "force-systemd-env-550000" primary control-plane node in "force-systemd-env-550000" cluster
	I0610 03:32:11.987770    7353 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:32:11.987786    7353 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:32:11.987790    7353 cache.go:56] Caching tarball of preloaded images
	I0610 03:32:11.987848    7353 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:32:11.987852    7353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:32:11.987907    7353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/force-systemd-env-550000/config.json ...
	I0610 03:32:11.987917    7353 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/force-systemd-env-550000/config.json: {Name:mkefe10f6d0bc5c819f530998c11b76ec759a129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:32:11.988205    7353 start.go:360] acquireMachinesLock for force-systemd-env-550000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:11.988237    7353 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "force-systemd-env-550000"
	I0610 03:32:11.988246    7353 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-550000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-550000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:11.988277    7353 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:11.992767    7353 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:12.007608    7353 start.go:159] libmachine.API.Create for "force-systemd-env-550000" (driver="qemu2")
	I0610 03:32:12.007637    7353 client.go:168] LocalClient.Create starting
	I0610 03:32:12.007700    7353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:12.007729    7353 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:12.007738    7353 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:12.007787    7353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:12.007812    7353 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:12.007819    7353 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:12.008158    7353 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:12.148576    7353 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:12.211495    7353 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:12.211506    7353 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:12.211716    7353 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2
	I0610 03:32:12.224735    7353 main.go:141] libmachine: STDOUT: 
	I0610 03:32:12.224762    7353 main.go:141] libmachine: STDERR: 
	I0610 03:32:12.224854    7353 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2 +20000M
	I0610 03:32:12.239556    7353 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:12.239582    7353 main.go:141] libmachine: STDERR: 
	I0610 03:32:12.239622    7353 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2
	I0610 03:32:12.239629    7353 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:12.239663    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:37:23:f2:d2:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2
	I0610 03:32:12.242177    7353 main.go:141] libmachine: STDOUT: 
	I0610 03:32:12.242204    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:12.242234    7353 client.go:171] duration metric: took 234.593875ms to LocalClient.Create
	I0610 03:32:14.244422    7353 start.go:128] duration metric: took 2.256155s to createHost
	I0610 03:32:14.244495    7353 start.go:83] releasing machines lock for "force-systemd-env-550000", held for 2.256284084s
	W0610 03:32:14.244556    7353 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:14.250871    7353 out.go:177] * Deleting "force-systemd-env-550000" in qemu2 ...
	W0610 03:32:14.272340    7353 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:14.272377    7353 start.go:728] Will try again in 5 seconds ...
	I0610 03:32:19.274549    7353 start.go:360] acquireMachinesLock for force-systemd-env-550000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:32:19.275026    7353 start.go:364] duration metric: took 338.375µs to acquireMachinesLock for "force-systemd-env-550000"
	I0610 03:32:19.275180    7353 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-550000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:force-systemd-env-550000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:32:19.275496    7353 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:32:19.291949    7353 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 03:32:19.340140    7353 start.go:159] libmachine.API.Create for "force-systemd-env-550000" (driver="qemu2")
	I0610 03:32:19.340213    7353 client.go:168] LocalClient.Create starting
	I0610 03:32:19.340407    7353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:32:19.340470    7353 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:19.340492    7353 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:19.340577    7353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:32:19.340624    7353 main.go:141] libmachine: Decoding PEM data...
	I0610 03:32:19.340638    7353 main.go:141] libmachine: Parsing certificate...
	I0610 03:32:19.341264    7353 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:32:19.495955    7353 main.go:141] libmachine: Creating SSH key...
	I0610 03:32:19.609717    7353 main.go:141] libmachine: Creating Disk image...
	I0610 03:32:19.609722    7353 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:32:19.609896    7353 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2
	I0610 03:32:19.622559    7353 main.go:141] libmachine: STDOUT: 
	I0610 03:32:19.622585    7353 main.go:141] libmachine: STDERR: 
	I0610 03:32:19.622641    7353 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2 +20000M
	I0610 03:32:19.633660    7353 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:32:19.633683    7353 main.go:141] libmachine: STDERR: 
	I0610 03:32:19.633696    7353 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2
	I0610 03:32:19.633701    7353 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:32:19.633746    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:bd:51:46:0d:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/force-systemd-env-550000/disk.qcow2
	I0610 03:32:19.635405    7353 main.go:141] libmachine: STDOUT: 
	I0610 03:32:19.635425    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:32:19.635442    7353 client.go:171] duration metric: took 295.219042ms to LocalClient.Create
	I0610 03:32:21.637617    7353 start.go:128] duration metric: took 2.362124959s to createHost
	I0610 03:32:21.637683    7353 start.go:83] releasing machines lock for "force-systemd-env-550000", held for 2.362667584s
	W0610 03:32:21.638015    7353 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-550000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-550000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:32:21.647604    7353 out.go:177] 
	W0610 03:32:21.654661    7353 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:32:21.654697    7353 out.go:239] * 
	* 
	W0610 03:32:21.657418    7353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:32:21.665588    7353 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-550000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-550000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-550000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.080208ms)

-- stdout --
	* The control-plane node force-systemd-env-550000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-550000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-550000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-06-10 03:32:21.760594 -0700 PDT m=+762.397744834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-550000 -n force-systemd-env-550000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-550000 -n force-systemd-env-550000: exit status 7 (33.311916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-550000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-550000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-550000
--- FAIL: TestForceSystemdEnv (10.03s)
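
Every failure in this entry traces to the single root cause repeated verbatim in the log above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before QEMU is ever launched. A minimal recovery sketch for the CI host, assuming a Homebrew-managed socket_vmnet service (the service name and start method are assumptions, not taken from this log):

    # confirm the daemon's unix socket exists; it is absent or dead when the service is down
    ls -l /var/run/socket_vmnet
    # assumption: socket_vmnet was installed with Homebrew and can run as a root brew service
    sudo brew services start socket_vmnet
    # retry the failed start once the socket accepts connections
    out/minikube-darwin-arm64 start -p force-systemd-env-550000 --memory=2048 --driver=qemu2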

TestErrorSpam/setup (10.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-302000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-302000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 --driver=qemu2 : exit status 80 (10.160139208s)

-- stdout --
	* [nospam-302000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-302000" primary control-plane node in "nospam-302000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-302000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-302000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-302000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-302000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-302000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19046
- KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-302000" primary control-plane node in "nospam-302000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-302000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-302000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (10.16s)

TestFunctional/serial/StartWithProxy (9.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-878000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-878000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.884034375s)

-- stdout --
	* [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-878000" primary control-plane node in "functional-878000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-878000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50904 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50904 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:50904 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-878000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
- MINIKUBE_LOCATION=19046
- KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-878000" primary control-plane node in "functional-878000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-878000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:50904 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:50904 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:50904 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (73.116709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.96s)
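
The "Local proxy ignored" warnings above are expected rather than part of the failure: the test exports HTTP_PROXY=localhost:50904, and minikube deliberately declines to forward loopback proxy addresses into the VM's Docker environment, since localhost inside the guest does not reach the host's proxy. A sketch of the distinction, using a hypothetical routable proxy address:

    # loopback proxy: minikube warns "Local proxy ignored" and drops it from the Docker env
    HTTP_PROXY=localhost:50904 out/minikube-darwin-arm64 start -p functional-878000 --driver=qemu2
    # hypothetical routable proxy: an address like this would be passed through to the VM
    HTTP_PROXY=http://10.0.0.5:3128 out/minikube-darwin-arm64 start -p functional-878000 --driver=qemu2

The asserted "You appear to be using a proxy" message is only emitted once provisioning gets far enough to evaluate the proxy settings; this run dies at socket_vmnet first.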

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-878000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-878000 --alsologtostderr -v=8: exit status 80 (5.184030166s)

-- stdout --
	* [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-878000" primary control-plane node in "functional-878000" cluster
	* Restarting existing qemu2 VM for "functional-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:20:55.613705    5946 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:20:55.613824    5946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:20:55.613827    5946 out.go:304] Setting ErrFile to fd 2...
	I0610 03:20:55.613829    5946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:20:55.613952    5946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:20:55.614982    5946 out.go:298] Setting JSON to false
	I0610 03:20:55.631463    5946 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4826,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:20:55.631538    5946 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:20:55.636553    5946 out.go:177] * [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:20:55.643312    5946 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:20:55.647416    5946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:20:55.643381    5946 notify.go:220] Checking for updates...
	I0610 03:20:55.651917    5946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:20:55.655478    5946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:20:55.658508    5946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:20:55.661538    5946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:20:55.664749    5946 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:20:55.664804    5946 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:20:55.669443    5946 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:20:55.676431    5946 start.go:297] selected driver: qemu2
	I0610 03:20:55.676436    5946 start.go:901] validating driver "qemu2" against &{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:20:55.676510    5946 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:20:55.678823    5946 cni.go:84] Creating CNI manager for ""
	I0610 03:20:55.678840    5946 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:20:55.678890    5946 start.go:340] cluster config:
	{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:20:55.683342    5946 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:20:55.690460    5946 out.go:177] * Starting "functional-878000" primary control-plane node in "functional-878000" cluster
	I0610 03:20:55.694394    5946 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:20:55.694412    5946 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:20:55.694425    5946 cache.go:56] Caching tarball of preloaded images
	I0610 03:20:55.694492    5946 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:20:55.694498    5946 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:20:55.694579    5946 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/functional-878000/config.json ...
	I0610 03:20:55.695106    5946 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:20:55.695137    5946 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "functional-878000"
	I0610 03:20:55.695146    5946 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:20:55.695152    5946 fix.go:54] fixHost starting: 
	I0610 03:20:55.695277    5946 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
	W0610 03:20:55.695286    5946 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:20:55.698361    5946 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
	I0610 03:20:55.706454    5946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
	I0610 03:20:55.708611    5946 main.go:141] libmachine: STDOUT: 
	I0610 03:20:55.708635    5946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:20:55.708663    5946 fix.go:56] duration metric: took 13.5115ms for fixHost
	I0610 03:20:55.708668    5946 start.go:83] releasing machines lock for "functional-878000", held for 13.526542ms
	W0610 03:20:55.708678    5946 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:20:55.708716    5946 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:20:55.708721    5946 start.go:728] Will try again in 5 seconds ...
	I0610 03:21:00.710883    5946 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:21:00.711305    5946 start.go:364] duration metric: took 301.333µs to acquireMachinesLock for "functional-878000"
	I0610 03:21:00.711441    5946 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:21:00.711462    5946 fix.go:54] fixHost starting: 
	I0610 03:21:00.712230    5946 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
	W0610 03:21:00.712257    5946 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:21:00.719674    5946 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
	I0610 03:21:00.723816    5946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
	I0610 03:21:00.733359    5946 main.go:141] libmachine: STDOUT: 
	I0610 03:21:00.733412    5946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:21:00.733485    5946 fix.go:56] duration metric: took 22.020792ms for fixHost
	I0610 03:21:00.733536    5946 start.go:83] releasing machines lock for "functional-878000", held for 22.174208ms
	W0610 03:21:00.733716    5946 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:21:00.740784    5946 out.go:177] 
	W0610 03:21:00.744743    5946 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:21:00.744784    5946 out.go:239] * 
	* 
	W0610 03:21:00.747752    5946 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:21:00.753675    5946 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-878000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.185889875s for "functional-878000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (68.54775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
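
Note the shape of the failing command in the log: qemu-system-aarch64 is not launched directly but wrapped in socket_vmnet_client, which connects to the daemon's unix socket and hands the connected socket to QEMU as an inherited file descriptor (hence -netdev socket,id=net0,fd=3). Trimmed to the networking-relevant parts, the invocation has this general form:

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
        qemu-system-aarch64 \
        -device virtio-net-pci,netdev=net0 \
        -netdev socket,id=net0,fd=3 \
        ...

With the daemon down, the wrapper's connect fails with the "Connection refused" seen in STDERR and QEMU never runs, which is why every profile in this report ends up state=Stopped.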

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (32.074292ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-878000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (29.858916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
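
Because the cluster never started, minikube never wrote a functional-878000 entry into the kubeconfig, so current-context is simply unset rather than pointing at a stale cluster. For reference, the context plumbing the test depends on uses standard kubectl subcommands:

    kubectl config get-contexts                     # lists known contexts; a started profile appears by name
    kubectl config use-context functional-878000    # selects the context a passing run would have created
    kubectl config current-context                  # the exact command the test asserts on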

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-878000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-878000 get po -A: exit status 1 (26.343291ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-878000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-878000\n"*: args "kubectl --context functional-878000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-878000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (30.518458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl images: exit status 83 (40.788875ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (40.041375ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-878000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (38.618958ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (39.701667ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-878000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
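
The cache_reload sequence only makes sense against a running node: remove the cached pause image inside the VM, run cache reload, then confirm crictl can resolve the image again. The same three steps as they would be run by hand (commands taken from the log, expressed with the binary under test):

    out/minikube-darwin-arm64 -p functional-878000 ssh "sudo docker rmi registry.k8s.io/pause:latest"
    out/minikube-darwin-arm64 -p functional-878000 cache reload
    out/minikube-darwin-arm64 -p functional-878000 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"

Here every ssh step short-circuits with exit status 83 ("host is not running"), so only the cache reload itself, which operates on the local cache directory, completes.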

TestFunctional/serial/MinikubeKubectlCmd (0.63s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 kubectl -- --context functional-878000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 kubectl -- --context functional-878000 get pods: exit status 1 (598.2335ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-878000
	* no server found for cluster "functional-878000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-878000 kubectl -- --context functional-878000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (31.692708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.63s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-878000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-878000 get pods: exit status 1 (925.380958ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-878000
	* no server found for cluster "functional-878000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-878000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (29.266125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.96s)
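
Both kubectl failures above share one cause: the cluster never started, so minikube never wrote a "functional-878000" entry into the kubeconfig, and kubectl's context lookup fails before any network call is made. A quick way to confirm this (sketch only; the KUBECONFIG path is the one printed in the start output later in this report):

    # Sketch only -- inspect the kubeconfig this run used; no functional-878000 context should be listed.
    KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig kubectl config get-contexts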

TestFunctional/serial/ExtraConfig (5.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-878000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-878000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.180308916s)

-- stdout --
	* [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-878000" primary control-plane node in "functional-878000" cluster
	* Restarting existing qemu2 VM for "functional-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-878000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.180930375s for "functional-878000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (68.699792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.25s)
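
The start failure here is the same root cause seen throughout this report: nothing is listening on /var/run/socket_vmnet, so QEMU's networking helper refuses the connection, the VM never boots, and both restart attempts end in GUEST_PROVISION. A minimal triage sketch (paths taken from the SocketVMnetPath/SocketVMnetClientPath values logged later in this report; the restart mechanism depends on how socket_vmnet was installed):

    # Sketch only -- check whether the socket_vmnet daemon is alive before retrying.
    ls -l /var/run/socket_vmnet   # "Connection refused" usually means this socket is missing or orphaned
    ps aux | grep socket_vmnet    # is the daemon process running at all?
    # If it is down, restart it via whatever service manager installed it (e.g. launchd),
    # then retry: out/minikube-darwin-arm64 start -p functional-878000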

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-878000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-878000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.371083ms)

** stderr ** 
	error: context "functional-878000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-878000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (29.641958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
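
ComponentHealth fails before reaching any API server, again because the context does not exist. For reference, the query it issues (verbatim from the line above) would, on a healthy cluster, return the static control-plane pods (kube-apiserver, etcd, kube-scheduler, kube-controller-manager):

    kubectl --context functional-878000 get po -l tier=control-plane -n kube-system -o=json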

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 logs: exit status 83 (77.254791ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
	|         | -p download-only-625000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
	| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
	|         | -p download-only-069000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	| delete  | -p download-only-069000                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	| delete  | -p download-only-069000                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	| start   | --download-only -p                                                       | binary-mirror-699000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | binary-mirror-699000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50868                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-699000                                                  | binary-mirror-699000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	| addons  | disable dashboard -p                                                     | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | addons-028000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | addons-028000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-028000 --wait=true                                             | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-028000                                                         | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	| start   | -p nospam-302000 -n=1 --memory=2250 --wait=false                         | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-302000                                                         | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
	| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | minikube-local-cache-test:functional-878000                              |                      |         |         |                     |                     |
	| cache   | functional-878000 cache delete                                           | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | minikube-local-cache-test:functional-878000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	| ssh     | functional-878000 ssh sudo                                               | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-878000                                                        | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-878000 ssh                                                    | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-878000 cache reload                                           | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	| ssh     | functional-878000 ssh                                                    | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-878000 kubectl --                                             | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
	|         | --context functional-878000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 03:21:07
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 03:21:07.253844    6027 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:21:07.253964    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:21:07.253966    6027 out.go:304] Setting ErrFile to fd 2...
	I0610 03:21:07.253968    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:21:07.254103    6027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:21:07.255174    6027 out.go:298] Setting JSON to false
	I0610 03:21:07.271438    6027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4838,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:21:07.271499    6027 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:21:07.277174    6027 out.go:177] * [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:21:07.284204    6027 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:21:07.284265    6027 notify.go:220] Checking for updates...
	I0610 03:21:07.288178    6027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:21:07.292151    6027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:21:07.296134    6027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:21:07.299150    6027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:21:07.302141    6027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:21:07.305414    6027 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:21:07.305473    6027 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:21:07.310193    6027 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:21:07.317186    6027 start.go:297] selected driver: qemu2
	I0610 03:21:07.317190    6027 start.go:901] validating driver "qemu2" against &{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:21:07.317244    6027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:21:07.319550    6027 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:21:07.319593    6027 cni.go:84] Creating CNI manager for ""
	I0610 03:21:07.319601    6027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:21:07.319645    6027 start.go:340] cluster config:
	{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:21:07.323969    6027 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:21:07.332189    6027 out.go:177] * Starting "functional-878000" primary control-plane node in "functional-878000" cluster
	I0610 03:21:07.336111    6027 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:21:07.336121    6027 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:21:07.336128    6027 cache.go:56] Caching tarball of preloaded images
	I0610 03:21:07.336190    6027 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:21:07.336194    6027 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:21:07.336242    6027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/functional-878000/config.json ...
	I0610 03:21:07.336711    6027 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:21:07.336743    6027 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "functional-878000"
	I0610 03:21:07.336750    6027 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:21:07.336755    6027 fix.go:54] fixHost starting: 
	I0610 03:21:07.336870    6027 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
	W0610 03:21:07.336876    6027 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:21:07.345125    6027 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
	I0610 03:21:07.348999    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
	I0610 03:21:07.350989    6027 main.go:141] libmachine: STDOUT: 
	I0610 03:21:07.351004    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:21:07.351037    6027 fix.go:56] duration metric: took 14.281875ms for fixHost
	I0610 03:21:07.351041    6027 start.go:83] releasing machines lock for "functional-878000", held for 14.294875ms
	W0610 03:21:07.351047    6027 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:21:07.351098    6027 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:21:07.351103    6027 start.go:728] Will try again in 5 seconds ...
	I0610 03:21:12.353212    6027 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:21:12.353669    6027 start.go:364] duration metric: took 355.167µs to acquireMachinesLock for "functional-878000"
	I0610 03:21:12.353852    6027 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:21:12.353874    6027 fix.go:54] fixHost starting: 
	I0610 03:21:12.354611    6027 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
	W0610 03:21:12.354636    6027 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:21:12.358306    6027 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
	I0610 03:21:12.363284    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
	I0610 03:21:12.373143    6027 main.go:141] libmachine: STDOUT: 
	I0610 03:21:12.373201    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:21:12.373311    6027 fix.go:56] duration metric: took 19.440417ms for fixHost
	I0610 03:21:12.373323    6027 start.go:83] releasing machines lock for "functional-878000", held for 19.591666ms
	W0610 03:21:12.373527    6027 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:21:12.380186    6027 out.go:177] 
	W0610 03:21:12.384223    6027 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:21:12.384252    6027 out.go:239] * 
	W0610 03:21:12.386781    6027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:21:12.393104    6027 out.go:177] 
	
	
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-878000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
|         | -p download-only-625000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
|         | -p download-only-069000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| delete  | -p download-only-069000                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| delete  | -p download-only-069000                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| start   | --download-only -p                                                       | binary-mirror-699000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | binary-mirror-699000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50868                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-699000                                                  | binary-mirror-699000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| addons  | disable dashboard -p                                                     | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | addons-028000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | addons-028000                                                            |                      |         |         |                     |                     |
| start   | -p addons-028000 --wait=true                                             | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-028000                                                         | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| start   | -p nospam-302000 -n=1 --memory=2250 --wait=false                         | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-302000                                                         | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | minikube-local-cache-test:functional-878000                              |                      |         |         |                     |                     |
| cache   | functional-878000 cache delete                                           | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | minikube-local-cache-test:functional-878000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
| ssh     | functional-878000 ssh sudo                                               | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-878000                                                        | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-878000 ssh                                                    | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-878000 cache reload                                           | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
| ssh     | functional-878000 ssh                                                    | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-878000 kubectl --                                             | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | --context functional-878000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/06/10 03:21:07
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0610 03:21:07.253844    6027 out.go:291] Setting OutFile to fd 1 ...
I0610 03:21:07.253964    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:21:07.253966    6027 out.go:304] Setting ErrFile to fd 2...
I0610 03:21:07.253968    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:21:07.254103    6027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:21:07.255174    6027 out.go:298] Setting JSON to false
I0610 03:21:07.271438    6027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4838,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0610 03:21:07.271499    6027 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0610 03:21:07.277174    6027 out.go:177] * [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0610 03:21:07.284204    6027 out.go:177]   - MINIKUBE_LOCATION=19046
I0610 03:21:07.284265    6027 notify.go:220] Checking for updates...
I0610 03:21:07.288178    6027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
I0610 03:21:07.292151    6027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0610 03:21:07.296134    6027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0610 03:21:07.299150    6027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
I0610 03:21:07.302141    6027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0610 03:21:07.305414    6027 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:21:07.305473    6027 driver.go:392] Setting default libvirt URI to qemu:///system
I0610 03:21:07.310193    6027 out.go:177] * Using the qemu2 driver based on existing profile
I0610 03:21:07.317186    6027 start.go:297] selected driver: qemu2
I0610 03:21:07.317190    6027 start.go:901] validating driver "qemu2" against &{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 03:21:07.317244    6027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0610 03:21:07.319550    6027 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0610 03:21:07.319593    6027 cni.go:84] Creating CNI manager for ""
I0610 03:21:07.319601    6027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0610 03:21:07.319645    6027 start.go:340] cluster config:
{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 03:21:07.323969    6027 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0610 03:21:07.332189    6027 out.go:177] * Starting "functional-878000" primary control-plane node in "functional-878000" cluster
I0610 03:21:07.336111    6027 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0610 03:21:07.336121    6027 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0610 03:21:07.336128    6027 cache.go:56] Caching tarball of preloaded images
I0610 03:21:07.336190    6027 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0610 03:21:07.336194    6027 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0610 03:21:07.336242    6027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/functional-878000/config.json ...
I0610 03:21:07.336711    6027 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 03:21:07.336743    6027 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "functional-878000"
I0610 03:21:07.336750    6027 start.go:96] Skipping create...Using existing machine configuration
I0610 03:21:07.336755    6027 fix.go:54] fixHost starting: 
I0610 03:21:07.336870    6027 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
W0610 03:21:07.336876    6027 fix.go:138] unexpected machine state, will restart: <nil>
I0610 03:21:07.345125    6027 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
I0610 03:21:07.348999    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
I0610 03:21:07.350989    6027 main.go:141] libmachine: STDOUT: 
I0610 03:21:07.351004    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0610 03:21:07.351037    6027 fix.go:56] duration metric: took 14.281875ms for fixHost
I0610 03:21:07.351041    6027 start.go:83] releasing machines lock for "functional-878000", held for 14.294875ms
W0610 03:21:07.351047    6027 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 03:21:07.351098    6027 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 03:21:07.351103    6027 start.go:728] Will try again in 5 seconds ...
I0610 03:21:12.353212    6027 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 03:21:12.353669    6027 start.go:364] duration metric: took 355.167µs to acquireMachinesLock for "functional-878000"
I0610 03:21:12.353852    6027 start.go:96] Skipping create...Using existing machine configuration
I0610 03:21:12.353874    6027 fix.go:54] fixHost starting: 
I0610 03:21:12.354611    6027 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
W0610 03:21:12.354636    6027 fix.go:138] unexpected machine state, will restart: <nil>
I0610 03:21:12.358306    6027 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
I0610 03:21:12.363284    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
I0610 03:21:12.373143    6027 main.go:141] libmachine: STDOUT: 
I0610 03:21:12.373201    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0610 03:21:12.373311    6027 fix.go:56] duration metric: took 19.440417ms for fixHost
I0610 03:21:12.373323    6027 start.go:83] releasing machines lock for "functional-878000", held for 19.591666ms
W0610 03:21:12.373527    6027 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 03:21:12.380186    6027 out.go:177] 
W0610 03:21:12.384223    6027 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 03:21:12.384252    6027 out.go:239] * 
W0610 03:21:12.386781    6027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 03:21:12.393104    6027 out.go:177] 

* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
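
Note: the start failures captured in this section all reduce to the same host-side error: QEMU cannot reach "/var/run/socket_vmnet" (Connection refused), so the VM never boots and the dependent tests fail almost immediately. A minimal triage sketch for the agent host, assuming socket_vmnet was installed as a Homebrew-managed service; the recovery commands below are an illustration under that assumption, not part of the recorded output (only the profile name is taken from this log):

    # Confirm the socket_vmnet daemon is listening: its socket should exist
    ls -l /var/run/socket_vmnet
    # Restart the daemon (assumes a Homebrew-managed socket_vmnet service)
    sudo brew services restart socket_vmnet
    # Once the socket is back, recreate the affected profile
    minikube delete -p functional-878000
    minikube start -p functional-878000 --driver=qemu2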

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3000334833/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
|         | -p download-only-625000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
| start   | -o=json --download-only                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
|         | -p download-only-069000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| delete  | -p download-only-069000                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| delete  | -p download-only-625000                                                  | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| delete  | -p download-only-069000                                                  | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| start   | --download-only -p                                                       | binary-mirror-699000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | binary-mirror-699000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50868                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-699000                                                  | binary-mirror-699000 | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| addons  | disable dashboard -p                                                     | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | addons-028000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | addons-028000                                                            |                      |         |         |                     |                     |
| start   | -p addons-028000 --wait=true                                             | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-028000                                                         | addons-028000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| start   | -p nospam-302000 -n=1 --memory=2250 --wait=false                         | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-302000 --log_dir                                                  | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-302000                                                         | nospam-302000        | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT | 10 Jun 24 03:20 PDT |
| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-878000 cache add                                              | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | minikube-local-cache-test:functional-878000                              |                      |         |         |                     |                     |
| cache   | functional-878000 cache delete                                           | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | minikube-local-cache-test:functional-878000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
| ssh     | functional-878000 ssh sudo                                               | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-878000                                                        | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-878000 ssh                                                    | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-878000 cache reload                                           | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
| ssh     | functional-878000 ssh                                                    | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT | 10 Jun 24 03:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-878000 kubectl --                                             | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | --context functional-878000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-878000                                                     | functional-878000    | jenkins | v1.33.1 | 10 Jun 24 03:21 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/06/10 03:21:07
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.3 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0610 03:21:07.253844    6027 out.go:291] Setting OutFile to fd 1 ...
I0610 03:21:07.253964    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:21:07.253966    6027 out.go:304] Setting ErrFile to fd 2...
I0610 03:21:07.253968    6027 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:21:07.254103    6027 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:21:07.255174    6027 out.go:298] Setting JSON to false
I0610 03:21:07.271438    6027 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4838,"bootTime":1718010029,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0610 03:21:07.271499    6027 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0610 03:21:07.277174    6027 out.go:177] * [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
I0610 03:21:07.284204    6027 out.go:177]   - MINIKUBE_LOCATION=19046
I0610 03:21:07.284265    6027 notify.go:220] Checking for updates...
I0610 03:21:07.288178    6027 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
I0610 03:21:07.292151    6027 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0610 03:21:07.296134    6027 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0610 03:21:07.299150    6027 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
I0610 03:21:07.302141    6027 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0610 03:21:07.305414    6027 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:21:07.305473    6027 driver.go:392] Setting default libvirt URI to qemu:///system
I0610 03:21:07.310193    6027 out.go:177] * Using the qemu2 driver based on existing profile
I0610 03:21:07.317186    6027 start.go:297] selected driver: qemu2
I0610 03:21:07.317190    6027 start.go:901] validating driver "qemu2" against &{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 03:21:07.317244    6027 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0610 03:21:07.319550    6027 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0610 03:21:07.319593    6027 cni.go:84] Creating CNI manager for ""
I0610 03:21:07.319601    6027 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0610 03:21:07.319645    6027 start.go:340] cluster config:
{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0610 03:21:07.323969    6027 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0610 03:21:07.332189    6027 out.go:177] * Starting "functional-878000" primary control-plane node in "functional-878000" cluster
I0610 03:21:07.336111    6027 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0610 03:21:07.336121    6027 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
I0610 03:21:07.336128    6027 cache.go:56] Caching tarball of preloaded images
I0610 03:21:07.336190    6027 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0610 03:21:07.336194    6027 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0610 03:21:07.336242    6027 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/functional-878000/config.json ...
I0610 03:21:07.336711    6027 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 03:21:07.336743    6027 start.go:364] duration metric: took 28.5µs to acquireMachinesLock for "functional-878000"
I0610 03:21:07.336750    6027 start.go:96] Skipping create...Using existing machine configuration
I0610 03:21:07.336755    6027 fix.go:54] fixHost starting: 
I0610 03:21:07.336870    6027 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
W0610 03:21:07.336876    6027 fix.go:138] unexpected machine state, will restart: <nil>
I0610 03:21:07.345125    6027 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
I0610 03:21:07.348999    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
I0610 03:21:07.350989    6027 main.go:141] libmachine: STDOUT: 
I0610 03:21:07.351004    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0610 03:21:07.351037    6027 fix.go:56] duration metric: took 14.281875ms for fixHost
I0610 03:21:07.351041    6027 start.go:83] releasing machines lock for "functional-878000", held for 14.294875ms
W0610 03:21:07.351047    6027 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 03:21:07.351098    6027 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 03:21:07.351103    6027 start.go:728] Will try again in 5 seconds ...
I0610 03:21:12.353212    6027 start.go:360] acquireMachinesLock for functional-878000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0610 03:21:12.353669    6027 start.go:364] duration metric: took 355.167µs to acquireMachinesLock for "functional-878000"
I0610 03:21:12.353852    6027 start.go:96] Skipping create...Using existing machine configuration
I0610 03:21:12.353874    6027 fix.go:54] fixHost starting: 
I0610 03:21:12.354611    6027 fix.go:112] recreateIfNeeded on functional-878000: state=Stopped err=<nil>
W0610 03:21:12.354636    6027 fix.go:138] unexpected machine state, will restart: <nil>
I0610 03:21:12.358306    6027 out.go:177] * Restarting existing qemu2 VM for "functional-878000" ...
I0610 03:21:12.363284    6027 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:50:d9:5c:8d:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/functional-878000/disk.qcow2
I0610 03:21:12.373143    6027 main.go:141] libmachine: STDOUT: 
I0610 03:21:12.373201    6027 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I0610 03:21:12.373311    6027 fix.go:56] duration metric: took 19.440417ms for fixHost
I0610 03:21:12.373323    6027 start.go:83] releasing machines lock for "functional-878000", held for 19.591666ms
W0610 03:21:12.373527    6027 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0610 03:21:12.380186    6027 out.go:177] 
W0610 03:21:12.384223    6027 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0610 03:21:12.384252    6027 out.go:239] * 
W0610 03:21:12.386781    6027 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 03:21:12.393104    6027 out.go:177] 
***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
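Both restart attempts above die at the same step: qemu is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's socket, so the VM never comes back up and every test that needs the cluster fails downstream. A minimal sketch for confirming the daemon state on the test host, assuming socket_vmnet was installed via Homebrew (socket path taken from the log above):

    # Is the socket_vmnet daemon running, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If either check fails, restarting the service is the usual remediation for a Homebrew install
    sudo brew services start socket_vmnet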
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-878000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-878000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.517209ms)
** stderr ** 
	error: context "functional-878000" does not exist
** /stderr **
functional_test.go:2319: kubectl --context functional-878000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
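Note the failure-mode shift: this test never reaches its actual subject (an invalid service manifest) because the kubeconfig context was never created. Whether the context exists can be checked directly (sketch):

    # functional-878000 will be absent from this list while the cluster has never started
    kubectl config get-contexts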
TestFunctional/parallel/DashboardCmd (0.2s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-878000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-878000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-878000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-878000 --alsologtostderr -v=1] stderr:
I0610 03:22:01.856302    6356 out.go:291] Setting OutFile to fd 1 ...
I0610 03:22:01.856716    6356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:01.856720    6356 out.go:304] Setting ErrFile to fd 2...
I0610 03:22:01.856722    6356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:01.856928    6356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:22:01.857128    6356 mustload.go:65] Loading cluster: functional-878000
I0610 03:22:01.857354    6356 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:22:01.859016    6356 out.go:177] * The control-plane node functional-878000 host is not running: state=Stopped
I0610 03:22:01.862657    6356 out.go:177]   To start a cluster, run: "minikube start -p functional-878000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (43.61525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
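dashboard --url is expected to print a proxy URL on stdout instead of opening a browser; stdout is empty here because the cluster load step found the host stopped before any proxy was set up. For reference, the passing invocation (flags taken from the test args above; the URL in the comment is illustrative only):

    # Expected to print something like: http://127.0.0.1:36195/...
    out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-878000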
TestFunctional/parallel/StatusCmd (0.12s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 status: exit status 7 (30.70625ms)
-- stdout --
	functional-878000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-878000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (30.320917ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-878000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 status -o json: exit status 7 (30.044ms)
-- stdout --
	{"Name":"functional-878000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-878000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (29.65125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.12s)
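All three status variants (plain, templated, JSON) return exit status 7, which the helpers treat as "stopped but well-formed". In a script, the templated form is the simplest thing to gate on; a sketch using the same binary and profile as the tests:

    # Exit status 7 indicates a stopped host; only proceed when the host reports Running
    if out/minikube-darwin-arm64 -p functional-878000 status --format='{{.Host}}' | grep -q '^Running$'; then
        echo "cluster is up"
    fi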
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-878000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-878000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.412958ms)
** stderr ** 
	error: context "functional-878000" does not exist
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-878000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-878000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-878000 describe po hello-node-connect: exit status 1 (26.678459ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:1600: "kubectl --context functional-878000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-878000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-878000 logs -l app=hello-node-connect: exit status 1 (26.603083ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:1606: "kubectl --context functional-878000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-878000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-878000 describe svc hello-node-connect: exit status 1 (26.663208ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:1612: "kubectl --context functional-878000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (30.60525ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-878000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (30.903375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.12s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "echo hello": exit status 83 (46.553917ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n"*. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "cat /etc/hostname": exit status 83 (45.942916ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-878000"- but got *"* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n"*. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (30.164542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
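Exit status 83 here accompanies the stopped-host banner rather than any SSH-level failure: the CLI bails out before opening a connection. Once the VM is running, the two assertions above reduce to (expected output in the comments, per the test messages):

    out/minikube-darwin-arm64 -p functional-878000 ssh "echo hello"            # want: hello
    out/minikube-darwin-arm64 -p functional-878000 ssh "cat /etc/hostname"     # want: functional-878000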
TestFunctional/parallel/CpCmd (0.28s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (54.152584ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
helpers_test.go:561: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-878000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.915417ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
helpers_test.go:539: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-878000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-878000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cp functional-878000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2654294857/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 cp functional-878000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2654294857/001/cp-test.txt: exit status 83 (46.740042ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
helpers_test.go:561: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-878000 cp functional-878000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2654294857/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.693917ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
helpers_test.go:539: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2654294857/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (48.809708ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
helpers_test.go:561: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-878000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (39.986375ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
helpers_test.go:539: failed to run a cp command. args "out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-878000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-878000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
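The (-want +got) blocks above are go-cmp style diffs: the "-" lines are the expected file content ("Test file for checking file cp process"), and the "+" lines are what actually came back, i.e. the stopped-host banner. On a healthy cluster the round trip under test is simply (commands taken from the runs above):

    out/minikube-darwin-arm64 -p functional-878000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # want: the exact contents of testdata/cp-test.txt
    out/minikube-darwin-arm64 -p functional-878000 ssh -n functional-878000 "sudo cat /home/docker/cp-test.txt"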
TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/5687/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/test/nested/copy/5687/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/test/nested/copy/5687/hosts": exit status 83 (40.3445ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/test/nested/copy/5687/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-878000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-878000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (29.901375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)
TestFunctional/parallel/CertSync (0.29s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/5687.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/5687.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/5687.pem": exit status 83 (41.801042ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/5687.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo cat /etc/ssl/certs/5687.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/5687.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-878000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-878000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/5687.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /usr/share/ca-certificates/5687.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /usr/share/ca-certificates/5687.pem": exit status 83 (39.811333ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/5687.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo cat /usr/share/ca-certificates/5687.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/5687.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-878000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-878000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (38.89ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-878000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-878000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/56872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/56872.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/56872.pem": exit status 83 (50.508ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/56872.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo cat /etc/ssl/certs/56872.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/56872.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-878000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-878000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/56872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /usr/share/ca-certificates/56872.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /usr/share/ca-certificates/56872.pem": exit status 83 (43.588292ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/56872.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo cat /usr/share/ca-certificates/56872.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/56872.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-878000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-878000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.854833ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-878000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-878000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (30.125792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)
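CertSync expects each test PEM to be present in the VM both under its file name (e.g. /etc/ssl/certs/5687.pem) and under an OpenSSL hash name (e.g. /etc/ssl/certs/51391683.0). With a running VM, a manual spot check could compare fingerprints; a sketch, where the local testdata path is an assumption:

    # Identical fingerprints mean the certificate synced intact
    openssl x509 -in testdata/minikube_test.pem -noout -fingerprint
    out/minikube-darwin-arm64 -p functional-878000 ssh "sudo openssl x509 -in /etc/ssl/certs/5687.pem -noout -fingerprint"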
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-878000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-878000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.096709ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-878000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-878000 -n functional-878000: exit status 7 (33.27225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
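
Note: the go-template in this test asks kubectl to print only the label keys of the first node. As a minimal, self-contained Go sketch of the same template logic, here applied directly to a label map rather than to the full node list; the label values below are a hypothetical stand-in for (index .items 0).metadata.labels, not data from this run:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Hypothetical stand-in for the first node's metadata.labels.
        labels := map[string]string{
            "minikube.k8s.io/name":    "functional-878000",
            "minikube.k8s.io/primary": "true",
        }
        // Same range construct the test passes to kubectl: emit each
        // label key followed by a space.
        tmpl := template.Must(template.New("labels").Parse(
            "{{range $k, $v := .}}{{$k}} {{end}}"))
        _ = tmpl.Execute(os.Stdout, labels)
    }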

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo systemctl is-active crio": exit status 83 (39.553084ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 version -o=json --components: exit status 83 (38.952708ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-878000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-878000 image ls --format short --alsologtostderr:
I0610 03:22:02.256125    6371 out.go:291] Setting OutFile to fd 1 ...
I0610 03:22:02.256293    6371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.256296    6371 out.go:304] Setting ErrFile to fd 2...
I0610 03:22:02.256298    6371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.256432    6371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:22:02.257210    6371 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:22:02.257333    6371 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-878000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-878000 image ls --format table --alsologtostderr:
I0610 03:22:02.475594    6383 out.go:291] Setting OutFile to fd 1 ...
I0610 03:22:02.475746    6383 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.475749    6383 out.go:304] Setting ErrFile to fd 2...
I0610 03:22:02.475751    6383 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.475898    6383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:22:02.476324    6383 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:22:02.476380    6383 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-878000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-878000 image ls --format json --alsologtostderr:
I0610 03:22:02.438661    6381 out.go:291] Setting OutFile to fd 1 ...
I0610 03:22:02.438811    6381 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.438813    6381 out.go:304] Setting ErrFile to fd 2...
I0610 03:22:02.438816    6381 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.438930    6381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:22:02.439335    6381 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:22:02.439397    6381 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-878000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-878000 image ls --format yaml --alsologtostderr:
I0610 03:22:02.292603    6373 out.go:291] Setting OutFile to fd 1 ...
I0610 03:22:02.292769    6373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.292773    6373 out.go:304] Setting ErrFile to fd 2...
I0610 03:22:02.292775    6373 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.292910    6373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:22:02.293389    6373 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:22:02.293448    6373 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh pgrep buildkitd: exit status 83 (40.692541ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image build -t localhost/my-image:functional-878000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-878000 image build -t localhost/my-image:functional-878000 testdata/build --alsologtostderr:
I0610 03:22:02.368719    6377 out.go:291] Setting OutFile to fd 1 ...
I0610 03:22:02.369095    6377 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.369098    6377 out.go:304] Setting ErrFile to fd 2...
I0610 03:22:02.369101    6377 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:22:02.369233    6377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:22:02.369629    6377 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:22:02.370045    6377 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:22:02.370277    6377 build_images.go:133] succeeded building to: 
I0610 03:22:02.370281    6377 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls
functional_test.go:442: expected "localhost/my-image:functional-878000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-878000 docker-env) && out/minikube-darwin-arm64 status -p functional-878000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-878000 docker-env) && out/minikube-darwin-arm64 status -p functional-878000": exit status 1 (45.526792ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2: exit status 83 (41.633ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
** stderr ** 
	I0610 03:22:02.128688    6365 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:22:02.129828    6365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:02.129834    6365 out.go:304] Setting ErrFile to fd 2...
	I0610 03:22:02.129836    6365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:02.129981    6365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:22:02.130194    6365 mustload.go:65] Loading cluster: functional-878000
	I0610 03:22:02.130389    6365 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:22:02.134200    6365 out.go:177] * The control-plane node functional-878000 host is not running: state=Stopped
	I0610 03:22:02.138220    6365 out.go:177]   To start a cluster, run: "minikube start -p functional-878000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2: exit status 83 (41.500708ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
** stderr ** 
	I0610 03:22:02.214519    6369 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:22:02.214652    6369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:02.214655    6369 out.go:304] Setting ErrFile to fd 2...
	I0610 03:22:02.214657    6369 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:02.214776    6369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:22:02.214990    6369 mustload.go:65] Loading cluster: functional-878000
	I0610 03:22:02.215203    6369 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:22:02.219250    6369 out.go:177] * The control-plane node functional-878000 host is not running: state=Stopped
	I0610 03:22:02.223249    6369 out.go:177]   To start a cluster, run: "minikube start -p functional-878000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2: exit status 83 (42.638167ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
** stderr ** 
	I0610 03:22:02.171057    6367 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:22:02.171202    6367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:02.171206    6367 out.go:304] Setting ErrFile to fd 2...
	I0610 03:22:02.171208    6367 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:02.171338    6367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:22:02.171786    6367 mustload.go:65] Loading cluster: functional-878000
	I0610 03:22:02.172398    6367 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:22:02.177286    6367 out.go:177] * The control-plane node functional-878000 host is not running: state=Stopped
	I0610 03:22:02.181039    6367 out.go:177]   To start a cluster, run: "minikube start -p functional-878000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-878000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-878000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-878000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.095125ms)

** stderr ** 
	error: context "functional-878000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-878000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 service list: exit status 83 (45.454041ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-878000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 service list -o json: exit status 83 (45.7035ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-878000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 service --namespace=default --https --url hello-node: exit status 83 (41.794292ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-878000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 service hello-node --url --format={{.IP}}: exit status 83 (42.893041ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-878000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 service hello-node --url: exit status 83 (41.877958ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-878000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test.go:1565: failed to parse "* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"": parse "* The control-plane node functional-878000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-878000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
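
Note: the parse failure at functional_test.go:1565 is Go's net/url rejecting the ASCII control character (the embedded newline) in the advice text the test tried to treat as a URL. A minimal sketch reproducing that behavior with only the standard library (the string is an abbreviated stand-in for the actual minikube output):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // The captured stdout contains "\n"; net/url refuses any URL with
        // an ASCII control character before doing further parsing.
        _, err := url.Parse("* The control-plane node host is not running: state=Stopped\n  To start a cluster...")
        fmt.Println(err) // parse "...": net/url: invalid control character in URL
    }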

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0610 03:21:14.586766    6146 out.go:291] Setting OutFile to fd 1 ...
I0610 03:21:14.586949    6146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:21:14.586952    6146 out.go:304] Setting ErrFile to fd 2...
I0610 03:21:14.586955    6146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:21:14.587088    6146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:21:14.587306    6146 mustload.go:65] Loading cluster: functional-878000
I0610 03:21:14.587494    6146 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:21:14.591775    6146 out.go:177] * The control-plane node functional-878000 host is not running: state=Stopped
I0610 03:21:14.598843    6146 out.go:177]   To start a cluster, run: "minikube start -p functional-878000"

stdout: * The control-plane node functional-878000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-878000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 6145: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-878000": client config: context "functional-878000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-878000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-878000 get svc nginx-svc: exit status 1 (72.872667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-878000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-878000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.23s)
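
Note: "http: no Host in request URL" is what the Go HTTP client returns for a URL with an empty host; because the tunnel never published a service IP, the test effectively requested "http://". A minimal sketch, assuming nothing beyond the standard library:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // No host ever replaced the empty authority, so the client bails
        // out before any network I/O happens.
        _, err := http.Get("http://")
        fmt.Println(err) // Get "http:": http: no Host in request URL
    }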

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image load --daemon gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-878000 image load --daemon gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr: (1.318663542s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-878000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image load --daemon gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-878000 image load --daemon gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr: (1.318692875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-878000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.180612667s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-878000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image load --daemon gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-878000 image load --daemon gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr: (1.169111084s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-878000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image save gcr.io/google-containers/addon-resizer:functional-878000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-878000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.033289125s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
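
Note: the dig invocation above queries the cluster DNS service at 10.96.0.10 directly, bypassing the system resolvers listed by scutil --dns. A hedged Go equivalent of the same probe, pinning a net.Resolver to that server (a diagnostic sketch only, not part of the test suite):

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Send every lookup to the cluster DNS service the test targets
        // instead of the host's configured resolvers.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
        // With the tunnel down this times out, matching the dig result above.
        fmt.Println(addrs, err)
    }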

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (37.23s)

TestMultiControlPlane/serial/StartCluster (10.06s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-239000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-239000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.984551584s)

-- stdout --
	* [ha-239000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-239000" primary control-plane node in "ha-239000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-239000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
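
Note: the stdout above pins this failure on a refused connection to the socket_vmnet unix socket, which the qemu2 driver relies on for networking. A small diagnostic sketch (not part of the suite) that probes the same socket path:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" here means nothing is listening on the
        // socket_vmnet path the driver is configured to use.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }
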
** stderr ** 
	I0610 03:24:08.693638    6422 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:24:08.693781    6422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:24:08.693785    6422 out.go:304] Setting ErrFile to fd 2...
	I0610 03:24:08.693787    6422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:24:08.693919    6422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:24:08.694963    6422 out.go:298] Setting JSON to false
	I0610 03:24:08.710985    6422 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5019,"bootTime":1718010029,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:24:08.711054    6422 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:24:08.717349    6422 out.go:177] * [ha-239000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:24:08.725348    6422 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:24:08.728262    6422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:24:08.725423    6422 notify.go:220] Checking for updates...
	I0610 03:24:08.734205    6422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:24:08.737265    6422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:24:08.740193    6422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:24:08.743263    6422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:24:08.746432    6422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:24:08.750232    6422 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:24:08.757275    6422 start.go:297] selected driver: qemu2
	I0610 03:24:08.757280    6422 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:24:08.757286    6422 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:24:08.759477    6422 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:24:08.762257    6422 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:24:08.765404    6422 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:24:08.765443    6422 cni.go:84] Creating CNI manager for ""
	I0610 03:24:08.765447    6422 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 03:24:08.765450    6422 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 03:24:08.765488    6422 start.go:340] cluster config:
	{Name:ha-239000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:24:08.770045    6422 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:24:08.777163    6422 out.go:177] * Starting "ha-239000" primary control-plane node in "ha-239000" cluster
	I0610 03:24:08.781265    6422 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:24:08.781285    6422 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:24:08.781292    6422 cache.go:56] Caching tarball of preloaded images
	I0610 03:24:08.781363    6422 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:24:08.781369    6422 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:24:08.781584    6422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/ha-239000/config.json ...
	I0610 03:24:08.781599    6422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/ha-239000/config.json: {Name:mk4a8400f98e727224740c268b078109b51242a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:24:08.782000    6422 start.go:360] acquireMachinesLock for ha-239000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:24:08.782034    6422 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "ha-239000"
	I0610 03:24:08.782044    6422 start.go:93] Provisioning new machine with config: &{Name:ha-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:24:08.782075    6422 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:24:08.790224    6422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:24:08.805121    6422 start.go:159] libmachine.API.Create for "ha-239000" (driver="qemu2")
	I0610 03:24:08.805142    6422 client.go:168] LocalClient.Create starting
	I0610 03:24:08.805202    6422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:24:08.805239    6422 main.go:141] libmachine: Decoding PEM data...
	I0610 03:24:08.805268    6422 main.go:141] libmachine: Parsing certificate...
	I0610 03:24:08.805311    6422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:24:08.805334    6422 main.go:141] libmachine: Decoding PEM data...
	I0610 03:24:08.805342    6422 main.go:141] libmachine: Parsing certificate...
	I0610 03:24:08.805799    6422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:24:08.949110    6422 main.go:141] libmachine: Creating SSH key...
	I0610 03:24:09.066014    6422 main.go:141] libmachine: Creating Disk image...
	I0610 03:24:09.066019    6422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:24:09.066188    6422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:24:09.079190    6422 main.go:141] libmachine: STDOUT: 
	I0610 03:24:09.079213    6422 main.go:141] libmachine: STDERR: 
	I0610 03:24:09.079271    6422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2 +20000M
	I0610 03:24:09.090165    6422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:24:09.090179    6422 main.go:141] libmachine: STDERR: 
	I0610 03:24:09.090195    6422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:24:09.090198    6422 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:24:09.090230    6422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:28:01:7b:5b:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:24:09.091840    6422 main.go:141] libmachine: STDOUT: 
	I0610 03:24:09.091853    6422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:24:09.091872    6422 client.go:171] duration metric: took 286.72875ms to LocalClient.Create
	I0610 03:24:11.094015    6422 start.go:128] duration metric: took 2.311954667s to createHost
	I0610 03:24:11.094086    6422 start.go:83] releasing machines lock for "ha-239000", held for 2.31207875s
	W0610 03:24:11.094183    6422 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:24:11.101711    6422 out.go:177] * Deleting "ha-239000" in qemu2 ...
	W0610 03:24:11.131375    6422 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:24:11.131408    6422 start.go:728] Will try again in 5 seconds ...
	I0610 03:24:16.133520    6422 start.go:360] acquireMachinesLock for ha-239000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:24:16.133963    6422 start.go:364] duration metric: took 319.125µs to acquireMachinesLock for "ha-239000"
	I0610 03:24:16.134078    6422 start.go:93] Provisioning new machine with config: &{Name:ha-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:24:16.134352    6422 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:24:16.144906    6422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:24:16.186800    6422 start.go:159] libmachine.API.Create for "ha-239000" (driver="qemu2")
	I0610 03:24:16.186846    6422 client.go:168] LocalClient.Create starting
	I0610 03:24:16.186976    6422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:24:16.187037    6422 main.go:141] libmachine: Decoding PEM data...
	I0610 03:24:16.187053    6422 main.go:141] libmachine: Parsing certificate...
	I0610 03:24:16.187114    6422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:24:16.187160    6422 main.go:141] libmachine: Decoding PEM data...
	I0610 03:24:16.187175    6422 main.go:141] libmachine: Parsing certificate...
	I0610 03:24:16.187684    6422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:24:16.339457    6422 main.go:141] libmachine: Creating SSH key...
	I0610 03:24:16.577718    6422 main.go:141] libmachine: Creating Disk image...
	I0610 03:24:16.577742    6422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:24:16.577967    6422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:24:16.591197    6422 main.go:141] libmachine: STDOUT: 
	I0610 03:24:16.591221    6422 main.go:141] libmachine: STDERR: 
	I0610 03:24:16.591288    6422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2 +20000M
	I0610 03:24:16.602281    6422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:24:16.602294    6422 main.go:141] libmachine: STDERR: 
	I0610 03:24:16.602304    6422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:24:16.602313    6422 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:24:16.602350    6422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:06:b3:bb:43:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:24:16.604016    6422 main.go:141] libmachine: STDOUT: 
	I0610 03:24:16.604030    6422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:24:16.604042    6422 client.go:171] duration metric: took 417.19675ms to LocalClient.Create
	I0610 03:24:18.606362    6422 start.go:128] duration metric: took 2.472005208s to createHost
	I0610 03:24:18.606457    6422 start.go:83] releasing machines lock for "ha-239000", held for 2.472488625s
	W0610 03:24:18.606834    6422 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:24:18.621928    6422 out.go:177] 
	W0610 03:24:18.626127    6422 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:24:18.626152    6422 out.go:239] * 
	W0610 03:24:18.628574    6422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:24:18.636899    6422 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-239000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (68.5855ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.06s)
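Both provisioning attempts above die at the same step: QEMU is launched through socket_vmnet_client, which cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created and every later test in this group inherits a stopped host. A minimal, hypothetical pre-flight probe (not part of the test suite; the socket path is taken from SocketVMnetPath in the cluster config dumped above) that would distinguish a missing socket_vmnet daemon from other start failures:

	// probe_socket_vmnet.go: dial the unix socket that QEMU networking depends on.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, time.Second)
		if err != nil {
			// The same "Connection refused" seen in both attempts above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}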

                                                
                                    
TestMultiControlPlane/serial/DeployApp (113.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (59.899542ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-239000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- rollout status deployment/busybox: exit status 1 (56.740292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.616792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.593875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.352416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.19775ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.270084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.177833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.475792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.104958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.232084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.202959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.48625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.445125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.769542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.2065ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.872ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.68175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (113.16s)
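The eleven identical ha_test.go:140 runs above are a poll loop: the test keeps asking for pod IPs until they appear or the retry budget runs out, and here every attempt fails because the cluster was never created. A rough sketch (not the actual ha_test.go source; the binary path and attempt count are copied from the log) of that poll-until-success pattern:

	// poll_pod_ips.go: retry `minikube kubectl -- get pods` until IPs show up.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podIPs(profile string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		for attempt := 1; attempt <= 11; attempt++ { // 11 attempts, as logged above
			if ips, err := podIPs("ha-239000"); err == nil && ips != "" {
				fmt.Println("pod IPs:", ips)
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("failed to resolve pod IPs (may be temporary)")
	}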

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-239000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.916209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-239000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (30.712833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-239000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-239000 -v=7 --alsologtostderr: exit status 83 (42.266667ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-239000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 03:26:11.998424    6515 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:11.999006    6515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:11.999010    6515 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:11.999012    6515 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:11.999186    6515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:11.999402    6515 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:11.999579    6515 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:12.004059    6515 out.go:177] * The control-plane node ha-239000 host is not running: state=Stopped
	I0610 03:26:12.008796    6515 out.go:177]   To start a cluster, run: "minikube start -p ha-239000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-239000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.32775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.07s)
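`node add` exits with status 83 because the control-plane host is stopped; the post-mortem then confirms the state with `status --format={{.Host}}`. A hedged sketch of the same guard, invoking the status command exactly as the test harness does above, before attempting to add a node:

	// guard_node_add.go: only attempt `node add` when the host reports Running.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "ha-239000").Output()
		if strings.TrimSpace(string(out)) != "Running" {
			fmt.Println(`host not running; run: minikube start -p ha-239000`)
			return
		}
		_ = exec.Command("out/minikube-darwin-arm64", "node", "add", "-p", "ha-239000").Run()
	}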

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-239000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-239000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.576166ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-239000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-239000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-239000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (30.806125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
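The second error at ha_test.go:264, "unexpected end of JSON input", is a consequence of the first: kubectl produced no output, and encoding/json rejects an empty byte slice with exactly that message. A small illustrative sketch (decodeLabels is a hypothetical helper, not the test's code) showing the failure mode and an explicit empty-input guard:

	// empty_json.go: an empty input versus a guarded decode.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func decodeLabels(out []byte) (any, error) {
		if len(out) == 0 {
			return nil, fmt.Errorf("no output to decode (kubectl failed)")
		}
		var v any
		if err := json.Unmarshal(out, &v); err != nil {
			return nil, err
		}
		return v, nil
	}

	func main() {
		var v any
		fmt.Println(json.Unmarshal([]byte{}, &v)) // "unexpected end of JSON input", as above
		_, err := decodeLabels(nil)
		fmt.Println(err) // the guarded version names the real cause
	}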

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-239000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-239000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.41175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
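The node-count assertion parses the `profile list --output json` dump shown above and counts Config.Nodes; with provisioning failed, only the single placeholder node from the profile is present instead of the expected four. A minimal sketch of that decode step (struct fields taken from the JSON above; only the fields the check needs are declared, and the payload is abbreviated):

	// profile_nodes.go: decode just Name, Status and the node list.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated from the dump above; json.Unmarshal ignores the many
		// extra fields the real payload carries.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-239000","Status":"Stopped","Config":{"Nodes":[{"ControlPlane":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		p := pl.Valid[0]
		fmt.Printf("%s: status=%s nodes=%d (test expected 4 and \"HAppy\")\n",
			p.Name, p.Status, len(p.Config.Nodes))
	}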

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status --output json -v=7 --alsologtostderr: exit status 7 (30.091459ms)

                                                
                                                
-- stdout --
	{"Name":"ha-239000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 03:26:12.227703    6528 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:12.227857    6528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.227860    6528 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:12.227863    6528 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.228003    6528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:12.228123    6528 out.go:298] Setting JSON to true
	I0610 03:26:12.228134    6528 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:12.228183    6528 notify.go:220] Checking for updates...
	I0610 03:26:12.228343    6528 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:12.228350    6528 status.go:255] checking status of ha-239000 ...
	I0610 03:26:12.228546    6528 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:12.228550    6528 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:12.228552    6528 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-239000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.94275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
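The ha_test.go:333 failure is a shape mismatch: with a single node, `status --output json` prints one JSON object (see the stdout above), while the test decodes into []cmd.Status and gets "cannot unmarshal object into Go value of type []cmd.Status". A hedged sketch of a decoder tolerant of both shapes (Status here is a local stand-in, not minikube's cmd.Status type):

	// status_decode.go: accept either a single status object or an array.
	package main

	import (
		"bytes"
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func decodeStatuses(out []byte) ([]Status, error) {
		out = bytes.TrimSpace(out)
		if len(out) > 0 && out[0] == '[' { // multi-node output: a JSON array
			var sts []Status
			err := json.Unmarshal(out, &sts)
			return sts, err
		}
		var st Status // single-node output: one object, as in the stdout above
		if err := json.Unmarshal(out, &st); err != nil {
			return nil, err
		}
		return []Status{st}, nil
	}

	func main() {
		sts, err := decodeStatuses([]byte(`{"Name":"ha-239000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`))
		fmt.Println(sts, err)
	}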

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 node stop m02 -v=7 --alsologtostderr: exit status 85 (47.398458ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 03:26:12.288500    6532 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:12.289077    6532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.289080    6532 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:12.289083    6532 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.289303    6532 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:12.289530    6532 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:12.289724    6532 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:12.293765    6532 out.go:177] 
	W0610 03:26:12.296712    6532 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0610 03:26:12.296716    6532 out.go:239] * 
	W0610 03:26:12.298561    6532 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:26:12.302695    6532 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-239000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (30.192083ms)

                                                
                                                
-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 03:26:12.335956    6534 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:12.336126    6534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.336130    6534 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:12.336132    6534 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.336283    6534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:12.336416    6534 out.go:298] Setting JSON to false
	I0610 03:26:12.336426    6534 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:12.336486    6534 notify.go:220] Checking for updates...
	I0610 03:26:12.336607    6534 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:12.336616    6534 status.go:255] checking status of ha-239000 ...
	I0610 03:26:12.336837    6534 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:12.336841    6534 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:12.336843    6534 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr": ha-239000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr": ha-239000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr": ha-239000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr": ha-239000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (30.281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
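The four assertions at ha_test.go:375-384 all scan the plain-text status output for "Running" entries, expecting three hosts and kubelets and two apiservers after one secondary node is stopped; with the whole profile stopped they all count zero. A rough, illustrative sketch of that counting check (the real test logic may differ; the status text is copied from the stdout above):

	// count_running.go: count Running components in `minikube status` text output.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		status := "ha-239000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Printf("hosts=%d kubelets=%d apiservers=%d (expected 3, 3, 2)\n",
			strings.Count(status, "host: Running"),
			strings.Count(status, "kubelet: Running"),
			strings.Count(status, "apiserver: Running"))
	}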

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-239000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.428625ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
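
Note: the assertion at ha_test.go:413 decodes `profile list --output json` and compares each profile's top-level Status; with the lone VM stopped, minikube computes "Stopped", so "Degraded" can never appear. A short sketch of that decode follows, modeling only the fields visible in the JSON quoted above (the local type names are assumptions):

    // profilestatus.go: read per-profile Status from `profile list --output json`.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // profileList models just the keys the assertion touches.
    type profileList struct {
        Invalid []json.RawMessage `json:"invalid"`
        Valid   []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            fmt.Println("profile list failed:", err)
            return
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s -> %s\n", p.Name, p.Status) // prints "ha-239000 -> Stopped" here
        }
    }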

TestMultiControlPlane/serial/RestartSecondaryNode (50.48s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.672917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0610 03:26:12.499483    6544 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:12.499879    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.499883    6544 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:12.499885    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.500044    6544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:12.500259    6544 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:12.500447    6544 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:12.504652    6544 out.go:177] 
	W0610 03:26:12.508723    6544 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0610 03:26:12.508728    6544 out.go:239] * 
	* 
	W0610 03:26:12.510612    6544 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:26:12.514668    6544 out.go:177] 

** /stderr **
ha_test.go:422: I0610 03:26:12.499483    6544 out.go:291] Setting OutFile to fd 1 ...
I0610 03:26:12.499879    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:26:12.499883    6544 out.go:304] Setting ErrFile to fd 2...
I0610 03:26:12.499885    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:26:12.500044    6544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:26:12.500259    6544 mustload.go:65] Loading cluster: ha-239000
I0610 03:26:12.500447    6544 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:26:12.504652    6544 out.go:177] 
W0610 03:26:12.508723    6544 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0610 03:26:12.508728    6544 out.go:239] * 
* 
W0610 03:26:12.510612    6544 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 03:26:12.514668    6544 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-239000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (29.127291ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:12.547208    6546 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:12.547344    6546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.547347    6546 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:12.547349    6546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:12.547484    6546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:12.547594    6546 out.go:298] Setting JSON to false
	I0610 03:26:12.547603    6546 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:12.547665    6546 notify.go:220] Checking for updates...
	I0610 03:26:12.547795    6546 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:12.547802    6546 status.go:255] checking status of ha-239000 ...
	I0610 03:26:12.548013    6546 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:12.548017    6546 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:12.548020    6546 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (74.420125ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:13.135699    6548 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:13.135895    6548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:13.135899    6548 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:13.135902    6548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:13.136108    6548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:13.136270    6548 out.go:298] Setting JSON to false
	I0610 03:26:13.136284    6548 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:13.136321    6548 notify.go:220] Checking for updates...
	I0610 03:26:13.136527    6548 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:13.136535    6548 status.go:255] checking status of ha-239000 ...
	I0610 03:26:13.136847    6548 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:13.136852    6548 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:13.136854    6548 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (74.687834ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:14.057391    6553 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:14.057656    6553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:14.057661    6553 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:14.057664    6553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:14.057839    6553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:14.058014    6553 out.go:298] Setting JSON to false
	I0610 03:26:14.058030    6553 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:14.058076    6553 notify.go:220] Checking for updates...
	I0610 03:26:14.058323    6553 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:14.058331    6553 status.go:255] checking status of ha-239000 ...
	I0610 03:26:14.058610    6553 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:14.058615    6553 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:14.058618    6553 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (74.88075ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:16.134809    6555 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:16.135032    6555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:16.135036    6555 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:16.135041    6555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:16.135189    6555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:16.135338    6555 out.go:298] Setting JSON to false
	I0610 03:26:16.135351    6555 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:16.135391    6555 notify.go:220] Checking for updates...
	I0610 03:26:16.135611    6555 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:16.135619    6555 status.go:255] checking status of ha-239000 ...
	I0610 03:26:16.135883    6555 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:16.135888    6555 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:16.135891    6555 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (77.038208ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:20.038689    6557 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:20.038916    6557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:20.038921    6557 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:20.038924    6557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:20.039091    6557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:20.039253    6557 out.go:298] Setting JSON to false
	I0610 03:26:20.039267    6557 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:20.039310    6557 notify.go:220] Checking for updates...
	I0610 03:26:20.039539    6557 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:20.039547    6557 status.go:255] checking status of ha-239000 ...
	I0610 03:26:20.039829    6557 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:20.039834    6557 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:20.039837    6557 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (74.944417ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:24.053411    6559 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:24.053614    6559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:24.053619    6559 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:24.053622    6559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:24.053826    6559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:24.053991    6559 out.go:298] Setting JSON to false
	I0610 03:26:24.054004    6559 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:24.054035    6559 notify.go:220] Checking for updates...
	I0610 03:26:24.054279    6559 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:24.054287    6559 status.go:255] checking status of ha-239000 ...
	I0610 03:26:24.054569    6559 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:24.054575    6559 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:24.054577    6559 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (74.499041ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:29.673379    6561 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:29.673587    6561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:29.673591    6561 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:29.673594    6561 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:29.673765    6561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:29.673955    6561 out.go:298] Setting JSON to false
	I0610 03:26:29.673970    6561 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:29.674003    6561 notify.go:220] Checking for updates...
	I0610 03:26:29.674211    6561 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:29.674224    6561 status.go:255] checking status of ha-239000 ...
	I0610 03:26:29.674495    6561 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:29.674500    6561 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:29.674503    6561 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (73.4975ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:26:37.722065    6566 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:26:37.722283    6566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:37.722288    6566 out.go:304] Setting ErrFile to fd 2...
	I0610 03:26:37.722291    6566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:26:37.722483    6566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:26:37.722637    6566 out.go:298] Setting JSON to false
	I0610 03:26:37.722650    6566 mustload.go:65] Loading cluster: ha-239000
	I0610 03:26:37.722699    6566 notify.go:220] Checking for updates...
	I0610 03:26:37.722926    6566 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:26:37.722934    6566 status.go:255] checking status of ha-239000 ...
	I0610 03:26:37.723227    6566 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:26:37.723232    6566 status.go:343] host is not running, skipping remaining checks
	I0610 03:26:37.723235    6566 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (74.640875ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:27:02.913135    6570 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:27:02.913379    6570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:02.913383    6570 out.go:304] Setting ErrFile to fd 2...
	I0610 03:27:02.913386    6570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:02.913568    6570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:27:02.913746    6570 out.go:298] Setting JSON to false
	I0610 03:27:02.913762    6570 mustload.go:65] Loading cluster: ha-239000
	I0610 03:27:02.913803    6570 notify.go:220] Checking for updates...
	I0610 03:27:02.914027    6570 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:27:02.914035    6570 status.go:255] checking status of ha-239000 ...
	I0610 03:27:02.914320    6570 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:27:02.914325    6570 status.go:343] host is not running, skipping remaining checks
	I0610 03:27:02.914327    6570 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (32.375167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.48s)
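
Note: `node start m02` exits 85 with GUEST_NODE_RETRIEVE because the saved profile contains only one node (the Nodes array in the config above holds a single unnamed control-plane entry), so a lookup for "m02" cannot succeed. The sketch below models that lookup on the observed behavior rather than minikube's actual helper; the type and function names are assumptions.

    // nodelookup.go: reproduce the "Could not find node m02" failure mode.
    package main

    import "fmt"

    type node struct {
        Name         string
        ControlPlane bool
    }

    type clusterConfig struct {
        Name  string
        Nodes []node
    }

    // findNode scans the profile's node list by name, as the failing
    // `node start` path must do before it can start anything.
    func findNode(cc clusterConfig, name string) (node, error) {
        for _, n := range cc.Nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        // The profile dumped earlier has exactly one node with an empty Name.
        cc := clusterConfig{Name: "ha-239000", Nodes: []node{{ControlPlane: true}}}
        if _, err := findNode(cc, "m02"); err != nil {
            fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
        }
    }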

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-239000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-239000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (30.095542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.41s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-239000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-239000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-239000 -v=7 --alsologtostderr: (2.057897708s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-239000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-239000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.224292583s)

-- stdout --
	* [ha-239000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-239000" primary control-plane node in "ha-239000" cluster
	* Restarting existing qemu2 VM for "ha-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:27:05.201232    6595 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:27:05.201402    6595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:05.201406    6595 out.go:304] Setting ErrFile to fd 2...
	I0610 03:27:05.201409    6595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:05.201591    6595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:27:05.202843    6595 out.go:298] Setting JSON to false
	I0610 03:27:05.222413    6595 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5196,"bootTime":1718010029,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:27:05.222484    6595 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:27:05.227867    6595 out.go:177] * [ha-239000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:27:05.235860    6595 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:27:05.235924    6595 notify.go:220] Checking for updates...
	I0610 03:27:05.239781    6595 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:27:05.242884    6595 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:27:05.246007    6595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:27:05.248819    6595 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:27:05.251849    6595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:27:05.255167    6595 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:27:05.255221    6595 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:27:05.259818    6595 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:27:05.266797    6595 start.go:297] selected driver: qemu2
	I0610 03:27:05.266805    6595 start.go:901] validating driver "qemu2" against &{Name:ha-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:27:05.266884    6595 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:27:05.269359    6595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:27:05.269406    6595 cni.go:84] Creating CNI manager for ""
	I0610 03:27:05.269411    6595 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 03:27:05.269462    6595 start.go:340] cluster config:
	{Name:ha-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:27:05.274154    6595 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:27:05.281802    6595 out.go:177] * Starting "ha-239000" primary control-plane node in "ha-239000" cluster
	I0610 03:27:05.284783    6595 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:27:05.284803    6595 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:27:05.284812    6595 cache.go:56] Caching tarball of preloaded images
	I0610 03:27:05.284867    6595 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:27:05.284872    6595 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:27:05.284935    6595 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/ha-239000/config.json ...
	I0610 03:27:05.285442    6595 start.go:360] acquireMachinesLock for ha-239000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:27:05.285482    6595 start.go:364] duration metric: took 33.542µs to acquireMachinesLock for "ha-239000"
	I0610 03:27:05.285491    6595 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:27:05.285497    6595 fix.go:54] fixHost starting: 
	I0610 03:27:05.285621    6595 fix.go:112] recreateIfNeeded on ha-239000: state=Stopped err=<nil>
	W0610 03:27:05.285629    6595 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:27:05.293623    6595 out.go:177] * Restarting existing qemu2 VM for "ha-239000" ...
	I0610 03:27:05.297849    6595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:06:b3:bb:43:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:27:05.300068    6595 main.go:141] libmachine: STDOUT: 
	I0610 03:27:05.300091    6595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:27:05.300122    6595 fix.go:56] duration metric: took 14.624167ms for fixHost
	I0610 03:27:05.300127    6595 start.go:83] releasing machines lock for "ha-239000", held for 14.640416ms
	W0610 03:27:05.300135    6595 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:27:05.300185    6595 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:27:05.300191    6595 start.go:728] Will try again in 5 seconds ...
	I0610 03:27:10.302276    6595 start.go:360] acquireMachinesLock for ha-239000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:27:10.302661    6595 start.go:364] duration metric: took 290.375µs to acquireMachinesLock for "ha-239000"
	I0610 03:27:10.302809    6595 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:27:10.302832    6595 fix.go:54] fixHost starting: 
	I0610 03:27:10.303589    6595 fix.go:112] recreateIfNeeded on ha-239000: state=Stopped err=<nil>
	W0610 03:27:10.303616    6595 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:27:10.312062    6595 out.go:177] * Restarting existing qemu2 VM for "ha-239000" ...
	I0610 03:27:10.316247    6595 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:06:b3:bb:43:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:27:10.325821    6595 main.go:141] libmachine: STDOUT: 
	I0610 03:27:10.325880    6595 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:27:10.325955    6595 fix.go:56] duration metric: took 23.128792ms for fixHost
	I0610 03:27:10.325971    6595 start.go:83] releasing machines lock for "ha-239000", held for 23.264917ms
	W0610 03:27:10.326153    6595 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:27:10.333103    6595 out.go:177] 
	W0610 03:27:10.336166    6595 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:27:10.336209    6595 out.go:239] * 
	* 
	W0610 03:27:10.338659    6595 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:27:10.348077    6595 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-239000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-239000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (33.516916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.41s)
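
Every FAIL in this run traces back to a single driver error: the qemu2 driver cannot dial the unix socket at /var/run/socket_vmnet, so no VM ever boots and every follow-up command sees state=Stopped. A minimal standalone reproduction of the failing dial (an illustrative sketch, not part of the suite; the socket path is the SocketVMnetPath value from the profile config logged above):

    // socketcheck.go: dial the socket_vmnet control socket the same way the
    // qemu2 driver must before it can launch the VM.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// On this runner no daemon is listening, so this reports
    		// "connection refused", matching the driver errors above.
    		fmt.Fprintln(os.Stderr, "dial failed:", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }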

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.955333ms)

-- stdout --
	* The control-plane node ha-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-239000"

-- /stdout --
** stderr ** 
	I0610 03:27:10.492894    6607 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:27:10.493293    6607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:10.493297    6607 out.go:304] Setting ErrFile to fd 2...
	I0610 03:27:10.493299    6607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:10.493433    6607 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:27:10.493650    6607 mustload.go:65] Loading cluster: ha-239000
	I0610 03:27:10.493848    6607 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:27:10.498656    6607 out.go:177] * The control-plane node ha-239000 host is not running: state=Stopped
	I0610 03:27:10.499875    6607 out.go:177]   To start a cluster, run: "minikube start -p ha-239000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-239000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (29.461958ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:27:10.531262    6609 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:27:10.531425    6609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:10.531428    6609 out.go:304] Setting ErrFile to fd 2...
	I0610 03:27:10.531430    6609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:10.531564    6609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:27:10.531684    6609 out.go:298] Setting JSON to false
	I0610 03:27:10.531693    6609 mustload.go:65] Loading cluster: ha-239000
	I0610 03:27:10.531740    6609 notify.go:220] Checking for updates...
	I0610 03:27:10.531892    6609 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:27:10.531898    6609 status.go:255] checking status of ha-239000 ...
	I0610 03:27:10.532118    6609 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:27:10.532122    6609 status.go:343] host is not running, skipping remaining checks
	I0610 03:27:10.532124    6609 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.691792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-239000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.182292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
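
The assertion at ha_test.go:413 decodes the `profile list --output json` blob shown above and compares the profile's Status field. A trimmed-down sketch of that decode (struct pared to the two keys the check reads; field names taken from the JSON in the log):

    // profilestatus.go: decode `minikube profile list --output json` and
    // print each valid profile's Status, the value the test compares.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		panic(err)
    	}
    	for _, p := range pl.Valid {
    		fmt.Printf("%s: %s\n", p.Name, p.Status) // test wants "Degraded"; this run shows "Stopped"
    	}
    }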

TestMultiControlPlane/serial/StopCluster (1.95s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-239000 stop -v=7 --alsologtostderr: (1.846187458s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr: exit status 7 (72.349458ms)

-- stdout --
	ha-239000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:27:12.577447    6629 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:27:12.577945    6629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:12.577951    6629 out.go:304] Setting ErrFile to fd 2...
	I0610 03:27:12.577954    6629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:12.578207    6629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:27:12.578410    6629 out.go:298] Setting JSON to false
	I0610 03:27:12.578429    6629 mustload.go:65] Loading cluster: ha-239000
	I0610 03:27:12.578570    6629 notify.go:220] Checking for updates...
	I0610 03:27:12.579067    6629 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:27:12.579079    6629 status.go:255] checking status of ha-239000 ...
	I0610 03:27:12.579348    6629 status.go:330] ha-239000 host status = "Stopped" (err=<nil>)
	I0610 03:27:12.579354    6629 status.go:343] host is not running, skipping remaining checks
	I0610 03:27:12.579357    6629 status.go:257] ha-239000 status: &{Name:ha-239000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr": ha-239000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr": ha-239000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-239000 status -v=7 --alsologtostderr": ha-239000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (32.31325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.95s)
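
The three complaints above (ha_test.go:543, 549 and 552) come from scanning the plain-text `status` output for per-node stanzas: a healthy HA run would report two control planes, three kubelets and two apiservers, but only the single stopped node is printed. A rough sketch of that kind of counting (substring counts stand in for the test's actual parser, which is an assumption here):

    // statuscount.go: count node stanzas in `minikube status` text output.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	status := "ha-239000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
    	fmt.Println("control planes:", strings.Count(status, "type: Control Plane")) // 1 in this run
    	fmt.Println("kubelets:", strings.Count(status, "kubelet:"))                  // 1 in this run
    	fmt.Println("apiservers:", strings.Count(status, "apiserver:"))              // 1 in this run
    }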

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-239000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-239000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183183166s)

-- stdout --
	* [ha-239000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-239000" primary control-plane node in "ha-239000" cluster
	* Restarting existing qemu2 VM for "ha-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:27:12.641216    6633 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:27:12.641348    6633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:12.641354    6633 out.go:304] Setting ErrFile to fd 2...
	I0610 03:27:12.641357    6633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:12.641475    6633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:27:12.642421    6633 out.go:298] Setting JSON to false
	I0610 03:27:12.658596    6633 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5203,"bootTime":1718010029,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:27:12.658657    6633 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:27:12.663554    6633 out.go:177] * [ha-239000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:27:12.670486    6633 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:27:12.670525    6633 notify.go:220] Checking for updates...
	I0610 03:27:12.678498    6633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:27:12.681478    6633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:27:12.684492    6633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:27:12.687494    6633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:27:12.690430    6633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:27:12.693707    6633 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:27:12.693991    6633 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:27:12.698478    6633 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:27:12.705589    6633 start.go:297] selected driver: qemu2
	I0610 03:27:12.705595    6633 start.go:901] validating driver "qemu2" against &{Name:ha-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:27:12.705650    6633 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:27:12.707938    6633 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:27:12.707977    6633 cni.go:84] Creating CNI manager for ""
	I0610 03:27:12.707982    6633 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 03:27:12.708029    6633 start.go:340] cluster config:
	{Name:ha-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:27:12.712315    6633 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:27:12.717456    6633 out.go:177] * Starting "ha-239000" primary control-plane node in "ha-239000" cluster
	I0610 03:27:12.721418    6633 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:27:12.721431    6633 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:27:12.721441    6633 cache.go:56] Caching tarball of preloaded images
	I0610 03:27:12.721493    6633 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:27:12.721499    6633 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:27:12.721552    6633 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/ha-239000/config.json ...
	I0610 03:27:12.722018    6633 start.go:360] acquireMachinesLock for ha-239000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:27:12.722046    6633 start.go:364] duration metric: took 22.083µs to acquireMachinesLock for "ha-239000"
	I0610 03:27:12.722055    6633 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:27:12.722061    6633 fix.go:54] fixHost starting: 
	I0610 03:27:12.722178    6633 fix.go:112] recreateIfNeeded on ha-239000: state=Stopped err=<nil>
	W0610 03:27:12.722187    6633 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:27:12.730417    6633 out.go:177] * Restarting existing qemu2 VM for "ha-239000" ...
	I0610 03:27:12.734318    6633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:06:b3:bb:43:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:27:12.736266    6633 main.go:141] libmachine: STDOUT: 
	I0610 03:27:12.736286    6633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:27:12.736316    6633 fix.go:56] duration metric: took 14.25375ms for fixHost
	I0610 03:27:12.736329    6633 start.go:83] releasing machines lock for "ha-239000", held for 14.269292ms
	W0610 03:27:12.736338    6633 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:27:12.736372    6633 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:27:12.736377    6633 start.go:728] Will try again in 5 seconds ...
	I0610 03:27:17.738517    6633 start.go:360] acquireMachinesLock for ha-239000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:27:17.738896    6633 start.go:364] duration metric: took 289.875µs to acquireMachinesLock for "ha-239000"
	I0610 03:27:17.739015    6633 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:27:17.739038    6633 fix.go:54] fixHost starting: 
	I0610 03:27:17.739777    6633 fix.go:112] recreateIfNeeded on ha-239000: state=Stopped err=<nil>
	W0610 03:27:17.739805    6633 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:27:17.748198    6633 out.go:177] * Restarting existing qemu2 VM for "ha-239000" ...
	I0610 03:27:17.752356    6633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:06:b3:bb:43:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/ha-239000/disk.qcow2
	I0610 03:27:17.762105    6633 main.go:141] libmachine: STDOUT: 
	I0610 03:27:17.762171    6633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:27:17.762245    6633 fix.go:56] duration metric: took 23.211083ms for fixHost
	I0610 03:27:17.762259    6633 start.go:83] releasing machines lock for "ha-239000", held for 23.341208ms
	W0610 03:27:17.762433    6633 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:27:17.770087    6633 out.go:177] 
	W0610 03:27:17.773303    6633 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:27:17.773329    6633 out.go:239] * 
	* 
	W0610 03:27:17.776301    6633 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:27:17.784256    6633 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-239000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (69.546792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
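
The "Restarting existing qemu2 VM" step logged at 03:27:12.734318 shows how the driver launches the guest: it execs socket_vmnet_client, which connects to /var/run/socket_vmnet and then runs the appended qemu-system-aarch64 command with the connected socket passed as fd 3 (hence "-netdev socket,id=net0,fd=3"). A condensed sketch of that invocation (argv cut down to the network-related flags; paths copied from the log line):

    // vmnetexec.go: run qemu through socket_vmnet_client as libmachine does.
    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command(
    		"/opt/socket_vmnet/bin/socket_vmnet_client", "/var/run/socket_vmnet",
    		"qemu-system-aarch64",
    		"-device", "virtio-net-pci,netdev=net0,mac=46:06:b3:bb:43:5e",
    		"-netdev", "socket,id=net0,fd=3",
    	)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	// With nothing listening on the socket this fails exactly like the log:
    	// ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
    	if err := cmd.Run(); err != nil {
    		os.Exit(1)
    	}
    }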

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.1s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-239000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.554708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.10s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-239000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-239000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.70125ms)

-- stdout --
	* The control-plane node ha-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-239000"

-- /stdout --
** stderr ** 
	I0610 03:27:17.998312    6649 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:27:17.998469    6649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:17.998472    6649 out.go:304] Setting ErrFile to fd 2...
	I0610 03:27:17.998474    6649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:27:17.998607    6649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:27:17.998847    6649 mustload.go:65] Loading cluster: ha-239000
	I0610 03:27:17.999053    6649 config.go:182] Loaded profile config "ha-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:27:18.003096    6649 out.go:177] * The control-plane node ha-239000 host is not running: state=Stopped
	I0610 03:27:18.006949    6649 out.go:177]   To start a cluster, run: "minikube start -p ha-239000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-239000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.897666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-239000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-239000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-239000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-239000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-239000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-239000 -n ha-239000: exit status 7 (29.505291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)

TestImageBuild/serial/Setup (10s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-587000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-587000 --driver=qemu2 : exit status 80 (9.935514375s)

-- stdout --
	* [image-587000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-587000" primary control-plane node in "image-587000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-587000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-587000 -n image-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-587000 -n image-587000: exit status 7 (67.217666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-587000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.00s)

TestJSONOutput/start/Command (9.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-997000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-997000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.786357584s)

-- stdout --
	{"specversion":"1.0","id":"5ef611da-8002-46b9-ab78-9c16f1e35e85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-997000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b0ad0aac-7041-4ecb-b7b1-da0be4795c5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19046"}}
	{"specversion":"1.0","id":"32a87bdc-b9b1-4d17-9ea8-bfdfa0b1534b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig"}}
	{"specversion":"1.0","id":"b3a81640-6326-4e8e-b08d-e5fb8d2c0718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4159de5c-dec5-412c-b562-200c5d8673d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2244ad3f-b907-4e2c-b1eb-a88a6a1a11fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube"}}
	{"specversion":"1.0","id":"f0283e83-207e-4f55-9483-428c2ed8cea0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f0d0918-c71f-4699-8590-37c933d6e585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"39a01c08-4400-4d42-8b25-978f752bf74b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5fe9eec6-8daf-45ca-813f-e2d41872f056","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-997000\" primary control-plane node in \"json-output-997000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c13c4f3-f938-4f5a-948d-7845a64e9ed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"0c4dec14-a21c-4625-a346-7d732290c5b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-997000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cb290a7-185b-4190-b4b4-f1768476e920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"20bd6830-e387-4399-a136-9d73ed89b650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2902dc81-596a-41c5-acc1-1e1fa37c802f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-997000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"4689dbe4-d5de-4076-8456-9a47e8900c96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"7c89af1f-c569-4526-970b-1cb413868578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-997000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
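
Note: the "converting to cloud events: invalid character 'O'" failure above follows directly from the socket_vmnet error: socket_vmnet_client's raw "OUTPUT:" and "ERROR:" lines are interleaved with the CloudEvents stream on stdout, and the harness decodes stdout line by line as JSON, so the first non-JSON byte aborts the check. A minimal Go sketch of that per-line decode (an illustration of the failure mode, not the actual json_output_test.go code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// A well-formed event followed by the stray lines that
		// socket_vmnet_client wrote into the same stream.
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `,
			`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
		}
		for _, ln := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(ln), &ev); err != nil {
				// Prints: converting to cloud events: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				return
			}
			fmt.Println("event type:", ev["type"])
		}
	}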

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-997000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-997000 --output=json --user=testUser: exit status 83 (77.539334ms)

-- stdout --
	{"specversion":"1.0","id":"451e2f3f-c651-4649-92f9-cb27b8c069e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-997000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"73ce1ee3-3df9-4861-abab-a78d286b92e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-997000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-997000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-997000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-997000 --output=json --user=testUser: exit status 83 (44.428042ms)

-- stdout --
	* The control-plane node json-output-997000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-997000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-997000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-997000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)
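
Note: the two failures above have different shapes. pause still emitted CloudEvents but exited 83 because the control-plane host was stopped; unpause printed plain "*"-prefixed text even though --output=json was passed, so the same per-line decode failed at the first byte ('*'). For reference, the exit codes that recur in this report: 80 (GUEST_PROVISION, provisioning failed), 83 (control-plane host not running), 85 (profile not found; see TestMinikubeProfile below), and 7 (minikube status against a stopped host).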

TestMinikubeProfile (10.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-773000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-773000 --driver=qemu2 : exit status 80 (9.916924541s)

-- stdout --
	* [first-773000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-773000" primary control-plane node in "first-773000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-773000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-773000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-773000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-10 03:27:52.242128 -0700 PDT m=+492.874973667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-775000 -n second-775000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-775000 -n second-775000: exit status 85 (76.613208ms)

-- stdout --
	* Profile "second-775000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-775000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-775000" host is not running, skipping log retrieval (state="* Profile \"second-775000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-775000\"")
helpers_test.go:175: Cleaning up "second-775000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-775000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-06-10 03:27:52.545801 -0700 PDT m=+493.178652126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-773000 -n first-773000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-773000 -n first-773000: exit status 7 (29.090167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-773000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-773000
--- FAIL: TestMinikubeProfile (10.35s)
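
Note: every start failure in this report reduces to the same precondition: nothing is listening on the /var/run/socket_vmnet unix socket that the qemu2 driver's networking depends on. A stand-alone probe of that precondition, using only the Go standard library (the socket path is taken from the errors above; dialing it may require root):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// With the socket_vmnet daemon down this fails the same way the
		// driver does ("connect: connection refused"), or with "no such
		// file or directory" if the socket path does not exist at all.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}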

TestMountStart/serial/StartWithMountFirst (10.13s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.057759667s)

-- stdout --
	* [mount-start-1-850000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-850000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-850000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-850000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-850000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-850000 -n mount-start-1-850000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-850000 -n mount-start-1-850000: exit status 7 (68.286833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-850000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.13s)

TestMultiNode/serial/FreshStart2Nodes (9.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-763000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-763000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.853723417s)

-- stdout --
	* [multinode-763000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-763000" primary control-plane node in "multinode-763000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-763000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:28:03.154613    6816 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:28:03.154749    6816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:28:03.154752    6816 out.go:304] Setting ErrFile to fd 2...
	I0610 03:28:03.154754    6816 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:28:03.154883    6816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:28:03.155945    6816 out.go:298] Setting JSON to false
	I0610 03:28:03.172271    6816 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5254,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:28:03.172337    6816 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:28:03.178976    6816 out.go:177] * [multinode-763000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:28:03.185928    6816 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:28:03.185967    6816 notify.go:220] Checking for updates...
	I0610 03:28:03.189975    6816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:28:03.192984    6816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:28:03.195963    6816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:28:03.198984    6816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:28:03.201970    6816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:28:03.205038    6816 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:28:03.208895    6816 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:28:03.214810    6816 start.go:297] selected driver: qemu2
	I0610 03:28:03.214816    6816 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:28:03.214820    6816 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:28:03.216942    6816 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:28:03.219891    6816 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:28:03.223020    6816 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:28:03.223051    6816 cni.go:84] Creating CNI manager for ""
	I0610 03:28:03.223055    6816 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 03:28:03.223059    6816 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 03:28:03.223097    6816 start.go:340] cluster config:
	{Name:multinode-763000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:28:03.227537    6816 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:28:03.234964    6816 out.go:177] * Starting "multinode-763000" primary control-plane node in "multinode-763000" cluster
	I0610 03:28:03.238952    6816 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:28:03.238967    6816 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:28:03.238975    6816 cache.go:56] Caching tarball of preloaded images
	I0610 03:28:03.239047    6816 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:28:03.239053    6816 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:28:03.239298    6816 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/multinode-763000/config.json ...
	I0610 03:28:03.239311    6816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/multinode-763000/config.json: {Name:mk525bba3179db6dfe3fb5a6a6b5f63f7a0339bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:28:03.239539    6816 start.go:360] acquireMachinesLock for multinode-763000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:28:03.239575    6816 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "multinode-763000"
	I0610 03:28:03.239586    6816 start.go:93] Provisioning new machine with config: &{Name:multinode-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:28:03.239623    6816 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:28:03.247946    6816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:28:03.266151    6816 start.go:159] libmachine.API.Create for "multinode-763000" (driver="qemu2")
	I0610 03:28:03.266176    6816 client.go:168] LocalClient.Create starting
	I0610 03:28:03.266247    6816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:28:03.266281    6816 main.go:141] libmachine: Decoding PEM data...
	I0610 03:28:03.266296    6816 main.go:141] libmachine: Parsing certificate...
	I0610 03:28:03.266336    6816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:28:03.266361    6816 main.go:141] libmachine: Decoding PEM data...
	I0610 03:28:03.266377    6816 main.go:141] libmachine: Parsing certificate...
	I0610 03:28:03.266775    6816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:28:03.410387    6816 main.go:141] libmachine: Creating SSH key...
	I0610 03:28:03.511575    6816 main.go:141] libmachine: Creating Disk image...
	I0610 03:28:03.511581    6816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:28:03.511755    6816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:28:03.524561    6816 main.go:141] libmachine: STDOUT: 
	I0610 03:28:03.524582    6816 main.go:141] libmachine: STDERR: 
	I0610 03:28:03.524645    6816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2 +20000M
	I0610 03:28:03.535408    6816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:28:03.535442    6816 main.go:141] libmachine: STDERR: 
	I0610 03:28:03.535453    6816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:28:03.535458    6816 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:28:03.535491    6816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:4e:57:17:81:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:28:03.537197    6816 main.go:141] libmachine: STDOUT: 
	I0610 03:28:03.537214    6816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:28:03.537233    6816 client.go:171] duration metric: took 271.056792ms to LocalClient.Create
	I0610 03:28:05.539389    6816 start.go:128] duration metric: took 2.299779584s to createHost
	I0610 03:28:05.539492    6816 start.go:83] releasing machines lock for "multinode-763000", held for 2.299942542s
	W0610 03:28:05.539566    6816 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:28:05.549797    6816 out.go:177] * Deleting "multinode-763000" in qemu2 ...
	W0610 03:28:05.579772    6816 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:28:05.579799    6816 start.go:728] Will try again in 5 seconds ...
	I0610 03:28:10.581958    6816 start.go:360] acquireMachinesLock for multinode-763000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:28:10.582467    6816 start.go:364] duration metric: took 402.333µs to acquireMachinesLock for "multinode-763000"
	I0610 03:28:10.582609    6816 start.go:93] Provisioning new machine with config: &{Name:multinode-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:28:10.582930    6816 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:28:10.599734    6816 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:28:10.649633    6816 start.go:159] libmachine.API.Create for "multinode-763000" (driver="qemu2")
	I0610 03:28:10.649677    6816 client.go:168] LocalClient.Create starting
	I0610 03:28:10.649785    6816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:28:10.649842    6816 main.go:141] libmachine: Decoding PEM data...
	I0610 03:28:10.649860    6816 main.go:141] libmachine: Parsing certificate...
	I0610 03:28:10.649935    6816 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:28:10.649979    6816 main.go:141] libmachine: Decoding PEM data...
	I0610 03:28:10.649999    6816 main.go:141] libmachine: Parsing certificate...
	I0610 03:28:10.650651    6816 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:28:10.803195    6816 main.go:141] libmachine: Creating SSH key...
	I0610 03:28:10.909870    6816 main.go:141] libmachine: Creating Disk image...
	I0610 03:28:10.909878    6816 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:28:10.910066    6816 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:28:10.922480    6816 main.go:141] libmachine: STDOUT: 
	I0610 03:28:10.922502    6816 main.go:141] libmachine: STDERR: 
	I0610 03:28:10.922558    6816 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2 +20000M
	I0610 03:28:10.933490    6816 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:28:10.933520    6816 main.go:141] libmachine: STDERR: 
	I0610 03:28:10.933531    6816 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:28:10.933537    6816 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:28:10.933573    6816 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e5:88:e3:de:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:28:10.935389    6816 main.go:141] libmachine: STDOUT: 
	I0610 03:28:10.935403    6816 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:28:10.935418    6816 client.go:171] duration metric: took 285.739375ms to LocalClient.Create
	I0610 03:28:12.937563    6816 start.go:128] duration metric: took 2.354637333s to createHost
	I0610 03:28:12.937629    6816 start.go:83] releasing machines lock for "multinode-763000", held for 2.355171125s
	W0610 03:28:12.938025    6816 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-763000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:28:12.951767    6816 out.go:177] 
	W0610 03:28:12.954918    6816 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:28:12.954950    6816 out.go:239] * 
	* 
	W0610 03:28:12.957741    6816 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:28:12.964741    6816 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-763000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (66.779375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.92s)
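
Note: the verbose log above shows how the VM's network is wired up: libmachine runs socket_vmnet_client, which connects to /var/run/socket_vmnet and then launches qemu-system-aarch64 with the connected socket inherited as file descriptor 3 (hence "-netdev socket,id=net0,fd=3" on the command line). The "Connection refused" occurs at that first connect, before qemu ever starts. A conceptual Go rendering of the wrapper pattern (socket_vmnet_client itself is a separate helper binary; this sketch is only an illustration):

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// This is the step that produced the errors in the log above.
			log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child (after stdin/stdout/stderr);
		// the real qemu argument list is abbreviated here.
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}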

TestMultiNode/serial/DeployApp2Nodes (109.41s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.733584ms)

** stderr ** 
	error: cluster "multinode-763000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- rollout status deployment/busybox: exit status 1 (56.290583ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.466625ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.873625ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.882459ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.796291ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.6915ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.451667ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.081542ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.533834ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.961834ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.054ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.446084ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.720542ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.422834ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.064166ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.6755ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (29.734292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (109.41s)
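
Note: nearly all of the 109s charged to this test is the retry loop at multinode_test.go:505-508: with no cluster behind the context, the harness re-runs the pod-IP query on a timer until the retry budget is exhausted, then gives up at line 524. A schematic version of such a poll loop (the interval and budget are illustrative, not the harness's actual values; assumes kubectl on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(100 * time.Second)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "get", "pods", "-o",
				"jsonpath={.items[*].status.podIP}").Output()
			if err == nil && len(out) > 0 {
				fmt.Printf("pod IPs: %s\n", out)
				return
			}
			fmt.Println("failed to retrieve Pod IPs (may be temporary)")
			time.Sleep(10 * time.Second)
		}
		fmt.Println("failed to resolve pod IPs: retry budget exhausted")
	}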

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-763000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.978375ms)

** stderr ** 
	error: no server found for cluster "multinode-763000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (30.012625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-763000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-763000 -v 3 --alsologtostderr: exit status 83 (41.928167ms)

-- stdout --
	* The control-plane node multinode-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-763000"

-- /stdout --
** stderr ** 
	I0610 03:30:02.574645    6924 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:02.574834    6924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:02.574841    6924 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:02.574844    6924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:02.574984    6924 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:02.575220    6924 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:02.575433    6924 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:02.579327    6924 out.go:177] * The control-plane node multinode-763000 host is not running: state=Stopped
	I0610 03:30:02.583149    6924 out.go:177]   To start a cluster, run: "minikube start -p multinode-763000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-763000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (29.29625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-763000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-763000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.368916ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-763000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-763000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-763000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (30.227208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
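
The two failures at multinode_test.go:223 and :230 are a cascade: the kubeconfig context is gone, kubectl writes nothing to stdout, and decoding the empty buffer is what produces "unexpected end of JSON input". A minimal reproduction of the decode step:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	// kubectl wrote nothing to stdout, so the test effectively decodes
	// an empty buffer:
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}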

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-763000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-763000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-763000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"multinode-763000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (30.2615ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
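
The assertion above compares the node count in "profile list --output json" against the 3 nodes the test created; the blob shows a single entry under Config.Nodes. A sketch of the decode-and-count step, with json tags mirroring the output above and structs trimmed to what the check needs (not the suite's actual types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList keeps only the fields the node count needs; the json tags
// mirror the blob printed above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, _ := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The run above reports 1 node here where 3 were expected.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}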

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status --output json --alsologtostderr: exit status 7 (30.0425ms)

-- stdout --
	{"Name":"multinode-763000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0610 03:30:02.805119    6937 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:02.805258    6937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:02.805261    6937 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:02.805264    6937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:02.805416    6937 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:02.805526    6937 out.go:298] Setting JSON to true
	I0610 03:30:02.805536    6937 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:02.805595    6937 notify.go:220] Checking for updates...
	I0610 03:30:02.805738    6937 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:02.805744    6937 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:02.805952    6937 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:02.805956    6937 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:02.805958    6937 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-763000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (29.34475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
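
The unmarshal error at multinode_test.go:191 means the command printed a single JSON object (the stdout above) while the test decodes into a slice of cmd.Status. A tolerant decoder that accepts either shape could look like this; Status here is a hypothetical stand-in mirroring only the fields visible in the stdout above:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a hypothetical stand-in for cmd.Status, mirroring only the
// fields visible in the stdout above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a JSON array (multi-node) or a single
// object (what the stopped single-node run printed above).
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	// The exact stdout from the failing run above.
	raw := []byte(`{"Name":"multinode-763000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	st, err := decodeStatuses(raw)
	fmt.Println(st, err)
}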

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 node stop m03: exit status 85 (46.3405ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-763000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status: exit status 7 (29.673208ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr: exit status 7 (30.341375ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:02.940797    6945 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:02.940939    6945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:02.940942    6945 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:02.940944    6945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:02.941082    6945 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:02.941210    6945 out.go:298] Setting JSON to false
	I0610 03:30:02.941219    6945 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:02.941268    6945 notify.go:220] Checking for updates...
	I0610 03:30:02.941433    6945 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:02.941440    6945 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:02.942434    6945 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:02.942441    6945 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:02.942444    6945 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr": multinode-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (30.166958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
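
The message at multinode_test.go:267 ("incorrect number of running kubelets") follows from the plain-text status output above, in which the only kubelet line reads "Stopped". A sketch of that kind of check, assuming a simple substring count (the suite's actual matching may differ):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output copied from the block above.
	out := "multinode-763000\n" +
		"type: Control Plane\n" +
		"host: Stopped\n" +
		"kubelet: Stopped\n" +
		"apiserver: Stopped\n" +
		"kubeconfig: Stopped\n"
	running := strings.Count(out, "kubelet: Running")
	stopped := strings.Count(out, "kubelet: Stopped")
	fmt.Printf("running=%d stopped=%d\n", running, stopped) // running=0 stopped=1
}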

TestMultiNode/serial/StartAfterStop (49.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.298042ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0610 03:30:03.001503    6949 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:03.002109    6949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:03.002113    6949 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:03.002115    6949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:03.002443    6949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:03.002834    6949 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:03.003018    6949 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:03.007381    6949 out.go:177] 
	W0610 03:30:03.010368    6949 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0610 03:30:03.010373    6949 out.go:239] * 
	* 
	W0610 03:30:03.012314    6949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:30:03.016304    6949 out.go:177] 

** /stderr **
multinode_test.go:284: I0610 03:30:03.001503    6949 out.go:291] Setting OutFile to fd 1 ...
I0610 03:30:03.002109    6949 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:30:03.002113    6949 out.go:304] Setting ErrFile to fd 2...
I0610 03:30:03.002115    6949 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 03:30:03.002443    6949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
I0610 03:30:03.002834    6949 mustload.go:65] Loading cluster: multinode-763000
I0610 03:30:03.003018    6949 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0610 03:30:03.007381    6949 out.go:177] 
W0610 03:30:03.010368    6949 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0610 03:30:03.010373    6949 out.go:239] * 
* 
W0610 03:30:03.012314    6949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 03:30:03.016304    6949 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-763000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (30.473709ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:03.049636    6951 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:03.049794    6951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:03.049797    6951 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:03.049799    6951 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:03.049942    6951 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:03.050062    6951 out.go:298] Setting JSON to false
	I0610 03:30:03.050072    6951 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:03.050134    6951 notify.go:220] Checking for updates...
	I0610 03:30:03.050261    6951 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:03.050267    6951 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:03.050872    6951 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:03.050878    6951 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:03.050881    6951 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (74.463125ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:03.966468    6953 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:03.966663    6953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:03.966667    6953 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:03.966670    6953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:03.966842    6953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:03.967003    6953 out.go:298] Setting JSON to false
	I0610 03:30:03.967016    6953 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:03.967049    6953 notify.go:220] Checking for updates...
	I0610 03:30:03.967293    6953 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:03.967301    6953 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:03.967577    6953 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:03.967581    6953 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:03.967584    6953 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (73.964583ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:05.222325    6955 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:05.222530    6955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:05.222535    6955 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:05.222538    6955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:05.222710    6955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:05.222882    6955 out.go:298] Setting JSON to false
	I0610 03:30:05.222895    6955 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:05.222931    6955 notify.go:220] Checking for updates...
	I0610 03:30:05.223158    6955 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:05.223167    6955 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:05.223454    6955 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:05.223458    6955 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:05.223462    6955 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (72.39775ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:07.251192    6958 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:07.251367    6958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:07.251371    6958 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:07.251374    6958 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:07.251527    6958 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:07.251675    6958 out.go:298] Setting JSON to false
	I0610 03:30:07.251686    6958 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:07.251729    6958 notify.go:220] Checking for updates...
	I0610 03:30:07.251958    6958 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:07.251966    6958 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:07.252237    6958 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:07.252242    6958 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:07.252245    6958 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (73.360833ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:09.278510    6960 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:09.278728    6960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:09.278732    6960 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:09.278735    6960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:09.278906    6960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:09.279071    6960 out.go:298] Setting JSON to false
	I0610 03:30:09.279084    6960 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:09.279124    6960 notify.go:220] Checking for updates...
	I0610 03:30:09.279376    6960 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:09.279384    6960 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:09.279675    6960 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:09.279680    6960 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:09.279683    6960 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (73.132666ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:15.568954    6967 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:15.569147    6967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:15.569151    6967 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:15.569154    6967 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:15.569310    6967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:15.569462    6967 out.go:298] Setting JSON to false
	I0610 03:30:15.569484    6967 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:15.569515    6967 notify.go:220] Checking for updates...
	I0610 03:30:15.569739    6967 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:15.569747    6967 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:15.570009    6967 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:15.570014    6967 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:15.570016    6967 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (71.650042ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:21.629848    6969 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:21.630050    6969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:21.630055    6969 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:21.630058    6969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:21.630218    6969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:21.630367    6969 out.go:298] Setting JSON to false
	I0610 03:30:21.630380    6969 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:21.630411    6969 notify.go:220] Checking for updates...
	I0610 03:30:21.630614    6969 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:21.630621    6969 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:21.630886    6969 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:21.630891    6969 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:21.630894    6969 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (75.794959ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:38.702383    6971 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:38.702838    6971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:38.702844    6971 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:38.702848    6971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:38.703136    6971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:38.703360    6971 out.go:298] Setting JSON to false
	I0610 03:30:38.703374    6971 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:38.703571    6971 notify.go:220] Checking for updates...
	I0610 03:30:38.704067    6971 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:38.704081    6971 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:38.704346    6971 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:38.704352    6971 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:38.704355    6971 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr: exit status 7 (74.3995ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:30:52.158665    6975 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:52.158862    6975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:52.158866    6975 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:52.158869    6975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:52.159056    6975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:52.159225    6975 out.go:298] Setting JSON to false
	I0610 03:30:52.159240    6975 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:30:52.159274    6975 notify.go:220] Checking for updates...
	I0610 03:30:52.159506    6975 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:52.159514    6975 status.go:255] checking status of multinode-763000 ...
	I0610 03:30:52.159817    6975 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:30:52.159822    6975 status.go:343] host is not running, skipping remaining checks
	I0610 03:30:52.159825    6975 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-763000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (33.182875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.22s)
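
The timestamps in the repeated status runs above (03:30:03 through 03:30:52) show the test re-polling the stopped host at growing intervals before giving up, which is where the 49.22s comes from. A sketch of that polling pattern; the interval schedule here is illustrative, not the suite's actual one:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Roughly the 49.22s this subtest spent before failing.
	deadline := time.Now().Add(49 * time.Second)
	for delay := time.Second; time.Now().Before(deadline); delay *= 2 {
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "multinode-763000").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("host is running")
			return
		}
		time.Sleep(delay)
	}
	fmt.Println("gave up: host never left Stopped")
}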

TestMultiNode/serial/RestartKeepsNodes (8.54s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-763000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-763000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-763000: (3.172648416s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-763000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-763000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.231247542s)

-- stdout --
	* [multinode-763000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-763000" primary control-plane node in "multinode-763000" cluster
	* Restarting existing qemu2 VM for "multinode-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:30:55.462771    6999 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:30:55.463223    6999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:55.463229    6999 out.go:304] Setting ErrFile to fd 2...
	I0610 03:30:55.463232    6999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:30:55.463496    6999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:30:55.465268    6999 out.go:298] Setting JSON to false
	I0610 03:30:55.485182    6999 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5426,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:30:55.485257    6999 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:30:55.489343    6999 out.go:177] * [multinode-763000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:30:55.497305    6999 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:30:55.501217    6999 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:30:55.497371    6999 notify.go:220] Checking for updates...
	I0610 03:30:55.507185    6999 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:30:55.510183    6999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:30:55.513204    6999 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:30:55.520208    6999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:30:55.524553    6999 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:30:55.524638    6999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:30:55.529173    6999 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:30:55.536228    6999 start.go:297] selected driver: qemu2
	I0610 03:30:55.536236    6999 start.go:901] validating driver "qemu2" against &{Name:multinode-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.1 ClusterName:multinode-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:30:55.536326    6999 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:30:55.538918    6999 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:30:55.538974    6999 cni.go:84] Creating CNI manager for ""
	I0610 03:30:55.538979    6999 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 03:30:55.539033    6999 start.go:340] cluster config:
	{Name:multinode-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-763000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:30:55.543965    6999 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:30:55.551219    6999 out.go:177] * Starting "multinode-763000" primary control-plane node in "multinode-763000" cluster
	I0610 03:30:55.555179    6999 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:30:55.555196    6999 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:30:55.555208    6999 cache.go:56] Caching tarball of preloaded images
	I0610 03:30:55.555282    6999 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:30:55.555289    6999 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:30:55.555363    6999 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/multinode-763000/config.json ...
	I0610 03:30:55.555873    6999 start.go:360] acquireMachinesLock for multinode-763000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:30:55.555914    6999 start.go:364] duration metric: took 33.875µs to acquireMachinesLock for "multinode-763000"
	I0610 03:30:55.555924    6999 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:30:55.555931    6999 fix.go:54] fixHost starting: 
	I0610 03:30:55.556068    6999 fix.go:112] recreateIfNeeded on multinode-763000: state=Stopped err=<nil>
	W0610 03:30:55.556078    6999 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:30:55.559196    6999 out.go:177] * Restarting existing qemu2 VM for "multinode-763000" ...
	I0610 03:30:55.567003    6999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e5:88:e3:de:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:30:55.569295    6999 main.go:141] libmachine: STDOUT: 
	I0610 03:30:55.569315    6999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:30:55.569346    6999 fix.go:56] duration metric: took 13.415458ms for fixHost
	I0610 03:30:55.569350    6999 start.go:83] releasing machines lock for "multinode-763000", held for 13.431125ms
	W0610 03:30:55.569360    6999 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:30:55.569394    6999 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:30:55.569399    6999 start.go:728] Will try again in 5 seconds ...
	I0610 03:31:00.571499    6999 start.go:360] acquireMachinesLock for multinode-763000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:31:00.571986    6999 start.go:364] duration metric: took 364.291µs to acquireMachinesLock for "multinode-763000"
	I0610 03:31:00.572117    6999 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:31:00.572137    6999 fix.go:54] fixHost starting: 
	I0610 03:31:00.572911    6999 fix.go:112] recreateIfNeeded on multinode-763000: state=Stopped err=<nil>
	W0610 03:31:00.572945    6999 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:31:00.577507    6999 out.go:177] * Restarting existing qemu2 VM for "multinode-763000" ...
	I0610 03:31:00.582575    6999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e5:88:e3:de:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:31:00.592454    6999 main.go:141] libmachine: STDOUT: 
	I0610 03:31:00.592539    6999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:31:00.592636    6999 fix.go:56] duration metric: took 20.498208ms for fixHost
	I0610 03:31:00.592655    6999 start.go:83] releasing machines lock for "multinode-763000", held for 20.64375ms
	W0610 03:31:00.592864    6999 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:31:00.600326    6999 out.go:177] 
	W0610 03:31:00.604439    6999 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:31:00.604467    6999 out.go:239] * 
	* 
	W0610 03:31:00.607030    6999 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:31:00.615436    6999 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-763000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-763000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (33.6755ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.54s)
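
Every failure in this block reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so every qemu2 VM start dies with "Connection refused". A quick host-side check, sketched under the assumption that socket_vmnet was installed as a launchd service (the service label varies by install method):

	# Is the daemon process alive, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If launchd-managed, confirm the service is loaded (label is install-specific):
	sudo launchctl list | grep -i socket_vmnet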

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 node delete m03: exit status 83 (46.127917ms)

-- stdout --
	* The control-plane node multinode-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-763000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-763000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr: exit status 7 (29.330875ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:31:00.804988    7013 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:31:00.805149    7013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:00.805152    7013 out.go:304] Setting ErrFile to fd 2...
	I0610 03:31:00.805154    7013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:00.805289    7013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:31:00.805399    7013 out.go:298] Setting JSON to false
	I0610 03:31:00.805409    7013 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:31:00.805466    7013 notify.go:220] Checking for updates...
	I0610 03:31:00.805599    7013 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:31:00.805608    7013 status.go:255] checking status of multinode-763000 ...
	I0610 03:31:00.805825    7013 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:31:00.805829    7013 status.go:343] host is not running, skipping remaining checks
	I0610 03:31:00.805831    7013 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (30.119042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (2s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-763000 stop: (1.875919792s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status: exit status 7 (65.372958ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr: exit status 7 (32.503125ms)

-- stdout --
	multinode-763000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 03:31:02.809674    7032 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:31:02.809801    7032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:02.809805    7032 out.go:304] Setting ErrFile to fd 2...
	I0610 03:31:02.809807    7032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:02.809923    7032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:31:02.810038    7032 out.go:298] Setting JSON to false
	I0610 03:31:02.810049    7032 mustload.go:65] Loading cluster: multinode-763000
	I0610 03:31:02.810105    7032 notify.go:220] Checking for updates...
	I0610 03:31:02.810246    7032 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:31:02.810252    7032 status.go:255] checking status of multinode-763000 ...
	I0610 03:31:02.810475    7032 status.go:330] multinode-763000 host status = "Stopped" (err=<nil>)
	I0610 03:31:02.810479    7032 status.go:343] host is not running, skipping remaining checks
	I0610 03:31:02.810481    7032 status.go:257] multinode-763000 status: &{Name:multinode-763000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr": multinode-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-763000 status --alsologtostderr": multinode-763000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (30.091416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.00s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-763000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-763000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.177432917s)

-- stdout --
	* [multinode-763000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-763000" primary control-plane node in "multinode-763000" cluster
	* Restarting existing qemu2 VM for "multinode-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-763000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:31:02.868898    7036 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:31:02.869049    7036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:02.869052    7036 out.go:304] Setting ErrFile to fd 2...
	I0610 03:31:02.869054    7036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:02.869197    7036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:31:02.870179    7036 out.go:298] Setting JSON to false
	I0610 03:31:02.886387    7036 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5433,"bootTime":1718010029,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:31:02.886450    7036 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:31:02.890755    7036 out.go:177] * [multinode-763000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:31:02.898632    7036 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:31:02.902674    7036 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:31:02.898677    7036 notify.go:220] Checking for updates...
	I0610 03:31:02.905681    7036 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:31:02.908600    7036 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:31:02.911665    7036 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:31:02.914671    7036 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:31:02.917882    7036 config.go:182] Loaded profile config "multinode-763000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:31:02.918131    7036 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:31:02.922606    7036 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:31:02.929621    7036 start.go:297] selected driver: qemu2
	I0610 03:31:02.929626    7036 start.go:901] validating driver "qemu2" against &{Name:multinode-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:31:02.929663    7036 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:31:02.931955    7036 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:31:02.931984    7036 cni.go:84] Creating CNI manager for ""
	I0610 03:31:02.931990    7036 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 03:31:02.932035    7036 start.go:340] cluster config:
	{Name:multinode-763000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-763000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:31:02.936347    7036 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:02.943655    7036 out.go:177] * Starting "multinode-763000" primary control-plane node in "multinode-763000" cluster
	I0610 03:31:02.947552    7036 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:31:02.947565    7036 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:31:02.947575    7036 cache.go:56] Caching tarball of preloaded images
	I0610 03:31:02.947622    7036 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:31:02.947627    7036 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:31:02.947693    7036 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/multinode-763000/config.json ...
	I0610 03:31:02.948170    7036 start.go:360] acquireMachinesLock for multinode-763000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:31:02.948201    7036 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "multinode-763000"
	I0610 03:31:02.948210    7036 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:31:02.948218    7036 fix.go:54] fixHost starting: 
	I0610 03:31:02.948342    7036 fix.go:112] recreateIfNeeded on multinode-763000: state=Stopped err=<nil>
	W0610 03:31:02.948351    7036 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:31:02.951634    7036 out.go:177] * Restarting existing qemu2 VM for "multinode-763000" ...
	I0610 03:31:02.959679    7036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e5:88:e3:de:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:31:02.961726    7036 main.go:141] libmachine: STDOUT: 
	I0610 03:31:02.961745    7036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:31:02.961776    7036 fix.go:56] duration metric: took 13.557667ms for fixHost
	I0610 03:31:02.961781    7036 start.go:83] releasing machines lock for "multinode-763000", held for 13.575666ms
	W0610 03:31:02.961787    7036 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:31:02.961821    7036 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:31:02.961826    7036 start.go:728] Will try again in 5 seconds ...
	I0610 03:31:07.963940    7036 start.go:360] acquireMachinesLock for multinode-763000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:31:07.964336    7036 start.go:364] duration metric: took 290.208µs to acquireMachinesLock for "multinode-763000"
	I0610 03:31:07.964444    7036 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:31:07.964465    7036 fix.go:54] fixHost starting: 
	I0610 03:31:07.965234    7036 fix.go:112] recreateIfNeeded on multinode-763000: state=Stopped err=<nil>
	W0610 03:31:07.965267    7036 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:31:07.973780    7036 out.go:177] * Restarting existing qemu2 VM for "multinode-763000" ...
	I0610 03:31:07.976869    7036 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:e5:88:e3:de:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/multinode-763000/disk.qcow2
	I0610 03:31:07.986741    7036 main.go:141] libmachine: STDOUT: 
	I0610 03:31:07.986826    7036 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:31:07.986930    7036 fix.go:56] duration metric: took 22.463833ms for fixHost
	I0610 03:31:07.986956    7036 start.go:83] releasing machines lock for "multinode-763000", held for 22.595ms
	W0610 03:31:07.987188    7036 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-763000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:31:07.992086    7036 out.go:177] 
	W0610 03:31:07.995746    7036 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:31:07.995801    7036 out.go:239] * 
	* 
	W0610 03:31:07.998720    7036 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:31:08.005723    7036 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-763000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (67.902167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (19.96s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-763000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-763000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-763000-m01 --driver=qemu2 : exit status 80 (9.886941083s)

-- stdout --
	* [multinode-763000-m01] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-763000-m01" primary control-plane node in "multinode-763000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-763000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-763000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-763000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-763000-m02 --driver=qemu2 : exit status 80 (9.824641167s)

-- stdout --
	* [multinode-763000-m02] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-763000-m02" primary control-plane node in "multinode-763000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-763000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-763000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-763000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-763000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-763000: exit status 83 (82.563ms)

-- stdout --
	* The control-plane node multinode-763000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-763000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-763000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-763000 -n multinode-763000: exit status 7 (30.448375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-763000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.96s)
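
The same refusal can be reproduced without minikube by dialing the unix socket directly; this is a sketch relying on macOS's bundled nc, which accepts -U for unix-domain sockets:

	nc -U /var/run/socket_vmnet
	# An immediate "Connection refused" confirms nothing is listening on the socket,
	# which matches every qemu2 driver failure above.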

TestPreload (10.09s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-907000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-907000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.923368s)

-- stdout --
	* [test-preload-907000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-907000" primary control-plane node in "test-preload-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:31:28.211392    7099 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:31:28.211688    7099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:28.211697    7099 out.go:304] Setting ErrFile to fd 2...
	I0610 03:31:28.211700    7099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:31:28.212357    7099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:31:28.213639    7099 out.go:298] Setting JSON to false
	I0610 03:31:28.230426    7099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5459,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:31:28.230488    7099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:31:28.237039    7099 out.go:177] * [test-preload-907000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:31:28.244980    7099 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:31:28.245028    7099 notify.go:220] Checking for updates...
	I0610 03:31:28.250021    7099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:31:28.252926    7099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:31:28.255980    7099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:31:28.259000    7099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:31:28.261912    7099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:31:28.265334    7099 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:31:28.265404    7099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:31:28.269969    7099 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:31:28.276951    7099 start.go:297] selected driver: qemu2
	I0610 03:31:28.276958    7099 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:31:28.276964    7099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:31:28.279340    7099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:31:28.282928    7099 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:31:28.285925    7099 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:31:28.285958    7099 cni.go:84] Creating CNI manager for ""
	I0610 03:31:28.285964    7099 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:31:28.285968    7099 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:31:28.285995    7099 start.go:340] cluster config:
	{Name:test-preload-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:31:28.290751    7099 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.297983    7099 out.go:177] * Starting "test-preload-907000" primary control-plane node in "test-preload-907000" cluster
	I0610 03:31:28.301985    7099 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0610 03:31:28.302073    7099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/test-preload-907000/config.json ...
	I0610 03:31:28.302087    7099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/test-preload-907000/config.json: {Name:mk7a8c6fa1aa84f633187d9cba50834b9b656c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:31:28.302102    7099 cache.go:107] acquiring lock: {Name:mkbb998c36a2212d49da3a6e16d0729d21134180 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302118    7099 cache.go:107] acquiring lock: {Name:mk443c263711bf7779de3a32cd668641347933e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302123    7099 cache.go:107] acquiring lock: {Name:mkea841c09ce988107774e69498c768f129c9f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302307    7099 cache.go:107] acquiring lock: {Name:mk1dc4bba9d6d8c2d7ad7d55326502eb8665c691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302362    7099 cache.go:107] acquiring lock: {Name:mkf31fdbd50ff6516342c84a791fec08e3160b3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302320    7099 cache.go:107] acquiring lock: {Name:mk744e6a12b2c9c388f8c2d3d5f780e5ff2232fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302351    7099 cache.go:107] acquiring lock: {Name:mk18cd41f51b8e29b62295309cfb366fd205343c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302449    7099 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:31:28.302466    7099 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 03:31:28.302467    7099 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 03:31:28.302507    7099 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 03:31:28.302517    7099 cache.go:107] acquiring lock: {Name:mkbddaa1c92445e06b89ea58e94f3dc8a8ef716b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:31:28.302562    7099 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 03:31:28.302534    7099 start.go:360] acquireMachinesLock for test-preload-907000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:31:28.302664    7099 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:31:28.302668    7099 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:31:28.302689    7099 start.go:364] duration metric: took 35.084µs to acquireMachinesLock for "test-preload-907000"
	I0610 03:31:28.302719    7099 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 03:31:28.302703    7099 start.go:93] Provisioning new machine with config: &{Name:test-preload-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:31:28.302756    7099 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:31:28.310979    7099 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:31:28.316919    7099 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 03:31:28.317636    7099 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 03:31:28.317708    7099 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 03:31:28.317752    7099 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 03:31:28.317798    7099 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:31:28.320373    7099 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 03:31:28.320437    7099 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:31:28.320527    7099 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:31:28.329012    7099 start.go:159] libmachine.API.Create for "test-preload-907000" (driver="qemu2")
	I0610 03:31:28.329034    7099 client.go:168] LocalClient.Create starting
	I0610 03:31:28.329098    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:31:28.329130    7099 main.go:141] libmachine: Decoding PEM data...
	I0610 03:31:28.329145    7099 main.go:141] libmachine: Parsing certificate...
	I0610 03:31:28.329190    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:31:28.329213    7099 main.go:141] libmachine: Decoding PEM data...
	I0610 03:31:28.329220    7099 main.go:141] libmachine: Parsing certificate...
	I0610 03:31:28.329594    7099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:31:28.475433    7099 main.go:141] libmachine: Creating SSH key...
	I0610 03:31:28.534496    7099 main.go:141] libmachine: Creating Disk image...
	I0610 03:31:28.534515    7099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:31:28.534774    7099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2
	I0610 03:31:28.548209    7099 main.go:141] libmachine: STDOUT: 
	I0610 03:31:28.548233    7099 main.go:141] libmachine: STDERR: 
	I0610 03:31:28.548282    7099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2 +20000M
	I0610 03:31:28.560691    7099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:31:28.560718    7099 main.go:141] libmachine: STDERR: 
	I0610 03:31:28.560758    7099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2
	I0610 03:31:28.560762    7099 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:31:28.560794    7099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b1:85:2b:7b:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2
	I0610 03:31:28.562612    7099 main.go:141] libmachine: STDOUT: 
	I0610 03:31:28.562645    7099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:31:28.562668    7099 client.go:171] duration metric: took 233.632ms to LocalClient.Create
	W0610 03:31:29.231006    7099 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 03:31:29.231153    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 03:31:29.257531    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0610 03:31:29.262839    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0610 03:31:29.286737    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0610 03:31:29.410312    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0610 03:31:29.447540    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 03:31:29.449056    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0610 03:31:29.513233    7099 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 03:31:29.513310    7099 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 03:31:29.615847    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0610 03:31:29.615928    7099 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.313680208s
	I0610 03:31:29.615970    7099 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0610 03:31:30.310297    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 03:31:30.310373    7099 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.00829975s
	I0610 03:31:30.310415    7099 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 03:31:30.562975    7099 start.go:128] duration metric: took 2.260220625s to createHost
	I0610 03:31:30.563038    7099 start.go:83] releasing machines lock for "test-preload-907000", held for 2.260371625s
	W0610 03:31:30.563101    7099 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:31:30.573461    7099 out.go:177] * Deleting "test-preload-907000" in qemu2 ...
	W0610 03:31:30.601736    7099 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:31:30.601772    7099 start.go:728] Will try again in 5 seconds ...
	I0610 03:31:31.599844    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0610 03:31:31.599892    7099 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.297660458s
	I0610 03:31:31.599922    7099 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0610 03:31:32.259144    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0610 03:31:32.259197    7099 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.956893417s
	I0610 03:31:32.259224    7099 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0610 03:31:32.829874    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0610 03:31:32.829955    7099 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.527914416s
	I0610 03:31:32.829987    7099 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0610 03:31:34.018398    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0610 03:31:34.018461    7099 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.716449292s
	I0610 03:31:34.018488    7099 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0610 03:31:34.586310    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0610 03:31:34.586367    7099 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.284096166s
	I0610 03:31:34.586397    7099 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0610 03:31:35.602118    7099 start.go:360] acquireMachinesLock for test-preload-907000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:31:35.602607    7099 start.go:364] duration metric: took 388.167µs to acquireMachinesLock for "test-preload-907000"
	I0610 03:31:35.602731    7099 start.go:93] Provisioning new machine with config: &{Name:test-preload-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:31:35.603005    7099 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:31:35.612879    7099 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:31:35.662747    7099 start.go:159] libmachine.API.Create for "test-preload-907000" (driver="qemu2")
	I0610 03:31:35.662803    7099 client.go:168] LocalClient.Create starting
	I0610 03:31:35.662936    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:31:35.663004    7099 main.go:141] libmachine: Decoding PEM data...
	I0610 03:31:35.663019    7099 main.go:141] libmachine: Parsing certificate...
	I0610 03:31:35.663072    7099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:31:35.663115    7099 main.go:141] libmachine: Decoding PEM data...
	I0610 03:31:35.663129    7099 main.go:141] libmachine: Parsing certificate...
	I0610 03:31:35.663658    7099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:31:35.815592    7099 main.go:141] libmachine: Creating SSH key...
	I0610 03:31:36.035043    7099 main.go:141] libmachine: Creating Disk image...
	I0610 03:31:36.035055    7099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:31:36.035252    7099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2
	I0610 03:31:36.048483    7099 main.go:141] libmachine: STDOUT: 
	I0610 03:31:36.048502    7099 main.go:141] libmachine: STDERR: 
	I0610 03:31:36.048571    7099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2 +20000M
	I0610 03:31:36.059980    7099 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:31:36.060010    7099 main.go:141] libmachine: STDERR: 
	I0610 03:31:36.060027    7099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2
	I0610 03:31:36.060037    7099 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:31:36.060085    7099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a4:ea:82:e6:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/test-preload-907000/disk.qcow2
	I0610 03:31:36.061852    7099 main.go:141] libmachine: STDOUT: 
	I0610 03:31:36.061868    7099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:31:36.061883    7099 client.go:171] duration metric: took 399.080625ms to LocalClient.Create
	I0610 03:31:37.881299    7099 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0610 03:31:37.881405    7099 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.579288667s
	I0610 03:31:37.881447    7099 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0610 03:31:37.881507    7099 cache.go:87] Successfully saved all images to host disk.
	I0610 03:31:38.064042    7099 start.go:128] duration metric: took 2.461022333s to createHost
	I0610 03:31:38.064097    7099 start.go:83] releasing machines lock for "test-preload-907000", held for 2.461485709s
	W0610 03:31:38.064375    7099 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:31:38.078908    7099 out.go:177] 
	W0610 03:31:38.081865    7099 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:31:38.081902    7099 out.go:239] * 
	* 
	W0610 03:31:38.084590    7099 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:31:38.092844    7099 out.go:177] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-907000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-06-10 03:31:38.110804 -0700 PDT m=+718.747258084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-907000 -n test-preload-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-907000 -n test-preload-907000: exit status 7 (65.314042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-907000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-907000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-907000
--- FAIL: TestPreload (10.09s)
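
Note: every VM-creation failure in this run shares one root cause, visible in the stderr above: the socket_vmnet client cannot reach its daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"). A minimal spot-check on the CI host, assuming the paths shown in the log (the launchd service name below is a guess and may vary by install):

	ls -l /var/run/socket_vmnet                        # daemon socket should exist
	sudo launchctl list | grep -i socket_vmnet         # a socket_vmnet service should be loaded
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client    # client binary should be present and executable

If the socket is missing, restarting the socket_vmnet daemon (however it is managed on this host) should clear the GUEST_PROVISION failures in the tests that follow.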
TestScheduledStopUnix (10.04s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-070000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-070000 --memory=2048 --driver=qemu2 : exit status 80 (9.874010292s)
-- stdout --
	* [scheduled-stop-070000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-070000" primary control-plane node in "scheduled-stop-070000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-070000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-070000" primary control-plane node in "scheduled-stop-070000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-06-10 03:31:48.152677 -0700 PDT m=+728.789290542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-070000 -n scheduled-stop-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-070000 -n scheduled-stop-070000: exit status 7 (67.903584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-070000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-070000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-070000
--- FAIL: TestScheduledStopUnix (10.04s)
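
Note: for a local reproduction that sidesteps socket_vmnet, the qemu2 driver can run on its builtin user-mode network instead. A sketch; the flag value is taken from minikube's qemu driver documentation and should be confirmed against `minikube start --help` for this build:

	out/minikube-darwin-arm64 start -p scheduled-stop-070000 --memory=2048 --driver=qemu2 --network=builtin

This removes the dependency on /var/run/socket_vmnet, at the cost of features that need a dedicated network, such as `minikube service` and `minikube tunnel`.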
TestSkaffold (13.47s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe938321033 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-617000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-617000 --memory=2600 --driver=qemu2 : exit status 80 (9.826279166s)
-- stdout --
	* [skaffold-617000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-617000" primary control-plane node in "skaffold-617000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-617000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-617000" primary control-plane node in "skaffold-617000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-06-10 03:32:01.624743 -0700 PDT m=+742.261572251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-617000 -n skaffold-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-617000 -n skaffold-617000: exit status 7 (63.268292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-617000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-617000
--- FAIL: TestSkaffold (13.47s)
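
Note: a single failing test can be re-run in isolation instead of the full suite. A sketch, assuming the conventional minikube integration-test invocation (the `integration` build tag and the `--minikube-start-args` flag name follow minikube's contributor docs and should be double-checked against this checkout):

	go test -tags integration -v -timeout 30m ./test/integration -run TestSkaffold -args --minikube-start-args="--driver=qemu2"

Run it from the repository root with out/minikube-darwin-arm64 already built, since the test shells out to that binary.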
TestRunningBinaryUpgrade (599.81s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1279179595 start -p running-upgrade-479000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1279179595 start -p running-upgrade-479000 --memory=2200 --vm-driver=qemu2 : (58.137012708s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-479000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-479000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m25.762173417s)
-- stdout --
	* [running-upgrade-479000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-479000" primary control-plane node in "running-upgrade-479000" cluster
	* Updating the running qemu2 "running-upgrade-479000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0610 03:33:42.902606    7510 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:33:42.902736    7510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:33:42.902740    7510 out.go:304] Setting ErrFile to fd 2...
	I0610 03:33:42.902742    7510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:33:42.902857    7510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:33:42.903889    7510 out.go:298] Setting JSON to false
	I0610 03:33:42.920764    7510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5593,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:33:42.920830    7510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:33:42.925331    7510 out.go:177] * [running-upgrade-479000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:33:42.932282    7510 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:33:42.937227    7510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:33:42.932322    7510 notify.go:220] Checking for updates...
	I0610 03:33:42.945260    7510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:33:42.948251    7510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:33:42.951255    7510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:33:42.954246    7510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:33:42.957514    7510 config.go:182] Loaded profile config "running-upgrade-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:33:42.961274    7510 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 03:33:42.964305    7510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:33:42.968254    7510 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:33:42.975264    7510 start.go:297] selected driver: qemu2
	I0610 03:33:42.975269    7510 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51096 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:33:42.975330    7510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:33:42.977300    7510 cni.go:84] Creating CNI manager for ""
	I0610 03:33:42.977315    7510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:33:42.977332    7510 start.go:340] cluster config:
	{Name:running-upgrade-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51096 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:33:42.977383    7510 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:33:42.985280    7510 out.go:177] * Starting "running-upgrade-479000" primary control-plane node in "running-upgrade-479000" cluster
	I0610 03:33:42.988173    7510 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 03:33:42.988188    7510 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0610 03:33:42.988198    7510 cache.go:56] Caching tarball of preloaded images
	I0610 03:33:42.988260    7510 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:33:42.988265    7510 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0610 03:33:42.988323    7510 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/config.json ...
	I0610 03:33:42.988778    7510 start.go:360] acquireMachinesLock for running-upgrade-479000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:33:42.988809    7510 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "running-upgrade-479000"
	I0610 03:33:42.988817    7510 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:33:42.988822    7510 fix.go:54] fixHost starting: 
	I0610 03:33:42.989518    7510 fix.go:112] recreateIfNeeded on running-upgrade-479000: state=Running err=<nil>
	W0610 03:33:42.989525    7510 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:33:42.998077    7510 out.go:177] * Updating the running qemu2 "running-upgrade-479000" VM ...
	I0610 03:33:43.002262    7510 machine.go:94] provisionDockerMachine start ...
	I0610 03:33:43.002299    7510 main.go:141] libmachine: Using SSH client type: native
	I0610 03:33:43.002411    7510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fca980] 0x102fcd1e0 <nil>  [] 0s} localhost 51064 <nil> <nil>}
	I0610 03:33:43.002416    7510 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 03:33:43.076552    7510 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-479000
	
	I0610 03:33:43.076567    7510 buildroot.go:166] provisioning hostname "running-upgrade-479000"
	I0610 03:33:43.076619    7510 main.go:141] libmachine: Using SSH client type: native
	I0610 03:33:43.076732    7510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fca980] 0x102fcd1e0 <nil>  [] 0s} localhost 51064 <nil> <nil>}
	I0610 03:33:43.076738    7510 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-479000 && echo "running-upgrade-479000" | sudo tee /etc/hostname
	I0610 03:33:43.149163    7510 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-479000
	
	I0610 03:33:43.149225    7510 main.go:141] libmachine: Using SSH client type: native
	I0610 03:33:43.149341    7510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fca980] 0x102fcd1e0 <nil>  [] 0s} localhost 51064 <nil> <nil>}
	I0610 03:33:43.149350    7510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-479000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-479000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-479000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 03:33:43.220494    7510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 03:33:43.220505    7510 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-4812/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-4812/.minikube}
	I0610 03:33:43.220513    7510 buildroot.go:174] setting up certificates
	I0610 03:33:43.220517    7510 provision.go:84] configureAuth start
	I0610 03:33:43.220522    7510 provision.go:143] copyHostCerts
	I0610 03:33:43.220589    7510 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem, removing ...
	I0610 03:33:43.220599    7510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem
	I0610 03:33:43.220766    7510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem (1078 bytes)
	I0610 03:33:43.220974    7510 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem, removing ...
	I0610 03:33:43.220978    7510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem
	I0610 03:33:43.221044    7510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem (1123 bytes)
	I0610 03:33:43.221155    7510 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem, removing ...
	I0610 03:33:43.221159    7510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem
	I0610 03:33:43.221213    7510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem (1675 bytes)
	I0610 03:33:43.221308    7510 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-479000 san=[127.0.0.1 localhost minikube running-upgrade-479000]
	I0610 03:33:43.397905    7510 provision.go:177] copyRemoteCerts
	I0610 03:33:43.397955    7510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 03:33:43.397970    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	I0610 03:33:43.435787    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 03:33:43.443043    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 03:33:43.449915    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 03:33:43.456683    7510 provision.go:87] duration metric: took 236.159333ms to configureAuth
	I0610 03:33:43.456693    7510 buildroot.go:189] setting minikube options for container-runtime
	I0610 03:33:43.456805    7510 config.go:182] Loaded profile config "running-upgrade-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:33:43.456843    7510 main.go:141] libmachine: Using SSH client type: native
	I0610 03:33:43.456936    7510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fca980] 0x102fcd1e0 <nil>  [] 0s} localhost 51064 <nil> <nil>}
	I0610 03:33:43.456940    7510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 03:33:43.526219    7510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 03:33:43.526229    7510 buildroot.go:70] root file system type: tmpfs
	I0610 03:33:43.526280    7510 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 03:33:43.526333    7510 main.go:141] libmachine: Using SSH client type: native
	I0610 03:33:43.526448    7510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fca980] 0x102fcd1e0 <nil>  [] 0s} localhost 51064 <nil> <nil>}
	I0610 03:33:43.526480    7510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 03:33:43.599612    7510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 03:33:43.599666    7510 main.go:141] libmachine: Using SSH client type: native
	I0610 03:33:43.599794    7510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fca980] 0x102fcd1e0 <nil>  [] 0s} localhost 51064 <nil> <nil>}
	I0610 03:33:43.599803    7510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 03:33:43.670880    7510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 03:33:43.670890    7510 machine.go:97] duration metric: took 668.633542ms to provisionDockerMachine
	I0610 03:33:43.670895    7510 start.go:293] postStartSetup for "running-upgrade-479000" (driver="qemu2")
	I0610 03:33:43.670900    7510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 03:33:43.670956    7510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 03:33:43.670966    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	I0610 03:33:43.710344    7510 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 03:33:43.711693    7510 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 03:33:43.711700    7510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-4812/.minikube/addons for local assets ...
	I0610 03:33:43.711772    7510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-4812/.minikube/files for local assets ...
	I0610 03:33:43.711892    7510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem -> 56872.pem in /etc/ssl/certs
	I0610 03:33:43.712023    7510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 03:33:43.715073    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem --> /etc/ssl/certs/56872.pem (1708 bytes)
	I0610 03:33:43.722211    7510 start.go:296] duration metric: took 51.31275ms for postStartSetup
	I0610 03:33:43.722228    7510 fix.go:56] duration metric: took 733.418667ms for fixHost
	I0610 03:33:43.722259    7510 main.go:141] libmachine: Using SSH client type: native
	I0610 03:33:43.722361    7510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fca980] 0x102fcd1e0 <nil>  [] 0s} localhost 51064 <nil> <nil>}
	I0610 03:33:43.722366    7510 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 03:33:43.792290    7510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015623.538561138
	
	I0610 03:33:43.792296    7510 fix.go:216] guest clock: 1718015623.538561138
	I0610 03:33:43.792300    7510 fix.go:229] Guest: 2024-06-10 03:33:43.538561138 -0700 PDT Remote: 2024-06-10 03:33:43.722229 -0700 PDT m=+0.838882167 (delta=-183.667862ms)
	I0610 03:33:43.792309    7510 fix.go:200] guest clock delta is within tolerance: -183.667862ms
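The skew check above runs date +%s.%N inside the guest and compares it against the host clock at the moment the reply lands; a small negative delta like this one is ordinary SSH round-trip noise, not real drift. A rough equivalent by hand (host side shown with python3 because BSD date on macOS lacks %N; port as used above):

    guest=$(ssh -p 51064 docker@localhost 'date +%s.%N')
    host=$(python3 -c 'import time; print(f"{time.time():.9f}")')
    echo "guest - host = $(echo "$guest - $host" | bc) s"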
	I0610 03:33:43.792314    7510 start.go:83] releasing machines lock for "running-upgrade-479000", held for 803.513209ms
	I0610 03:33:43.792363    7510 ssh_runner.go:195] Run: cat /version.json
	I0610 03:33:43.792372    7510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 03:33:43.792371    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	I0610 03:33:43.792386    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	W0610 03:33:43.792970    7510 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51064: connect: connection refused
	I0610 03:33:43.792994    7510 retry.go:31] will retry after 310.267794ms: dial tcp [::1]:51064: connect: connection refused
	W0610 03:33:44.152134    7510 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0610 03:33:44.152314    7510 ssh_runner.go:195] Run: systemctl --version
	I0610 03:33:44.156093    7510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 03:33:44.159285    7510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 03:33:44.159338    7510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0610 03:33:44.164963    7510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0610 03:33:44.172100    7510 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
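The two find ... -exec sed commands above rewrite every bridge/podman CNI config in place: IPv6 dst/subnet entries are deleted and the IPv4 subnet (plus, for podman, the gateway) is pinned to the pod CIDR. Against a stock podman bridge config the effect is roughly this (abbreviated sketch):

    fragment of /etc/cni/net.d/87-podman-bridge.conflist, before the rewrite:
        "subnet": "10.88.0.0/16",
        "gateway": "10.88.0.1"
    after:
        "subnet": "10.244.0.0/16",
        "gateway": "10.244.0.1"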
	I0610 03:33:44.172114    7510 start.go:494] detecting cgroup driver to use...
	I0610 03:33:44.172304    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 03:33:44.180232    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0610 03:33:44.184153    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 03:33:44.188005    7510 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 03:33:44.188036    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 03:33:44.191865    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 03:33:44.195322    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 03:33:44.198775    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 03:33:44.202001    7510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 03:33:44.204886    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 03:33:44.207622    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 03:33:44.210561    7510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
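Taken together, the sed edits above should leave the CRI-relevant fragment of /etc/containerd/config.toml looking roughly like this (abbreviated sketch of the expected result, not a dump of the actual file):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false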
	I0610 03:33:44.213520    7510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 03:33:44.216048    7510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 03:33:44.219201    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:33:44.314124    7510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 03:33:44.324669    7510 start.go:494] detecting cgroup driver to use...
	I0610 03:33:44.324745    7510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 03:33:44.330163    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 03:33:44.334685    7510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 03:33:44.343584    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 03:33:44.348721    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 03:33:44.353271    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 03:33:44.358830    7510 ssh_runner.go:195] Run: which cri-dockerd
	I0610 03:33:44.360316    7510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 03:33:44.363391    7510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 03:33:44.368501    7510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 03:33:44.462232    7510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 03:33:44.560581    7510 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 03:33:44.560638    7510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 03:33:44.565486    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:33:44.658271    7510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 03:33:48.015020    7510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.356783084s)
	I0610 03:33:48.015074    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 03:33:48.019890    7510 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 03:33:48.026677    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 03:33:48.031374    7510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 03:33:48.125729    7510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 03:33:48.204773    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:33:48.282953    7510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 03:33:48.288942    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 03:33:48.293353    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:33:48.382678    7510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 03:33:48.423499    7510 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 03:33:48.423584    7510 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 03:33:48.426137    7510 start.go:562] Will wait 60s for crictl version
	I0610 03:33:48.426174    7510 ssh_runner.go:195] Run: which crictl
	I0610 03:33:48.427523    7510 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 03:33:48.440001    7510 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0610 03:33:48.440065    7510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 03:33:48.452477    7510 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 03:33:48.470459    7510 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0610 03:33:48.470530    7510 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0610 03:33:48.472060    7510 kubeadm.go:877] updating cluster {Name:running-upgrade-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51096 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0610 03:33:48.472104    7510 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 03:33:48.472141    7510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 03:33:48.482439    7510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 03:33:48.482447    7510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
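The failed check is a registry-prefix mismatch: the preload carries k8s.gcr.io/... tags (listed above) while this minikube expects registry.k8s.io/... names, so it falls back to loading images from its on-host cache. Purely as an illustration of the mismatch, retagging one image would satisfy the check (minikube does not do this here):

    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1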
	I0610 03:33:48.482492    7510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 03:33:48.485741    7510 ssh_runner.go:195] Run: which lz4
	I0610 03:33:48.487021    7510 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 03:33:48.488277    7510 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 03:33:48.488286    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0610 03:33:49.191636    7510 docker.go:649] duration metric: took 704.658459ms to copy over tarball
	I0610 03:33:49.191691    7510 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 03:33:50.403703    7510 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.21201625s)
	I0610 03:33:50.403719    7510 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 03:33:50.419676    7510 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 03:33:50.423060    7510 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0610 03:33:50.428052    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:33:50.508401    7510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 03:33:51.857128    7510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.34872925s)
	I0610 03:33:51.857217    7510 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 03:33:51.873134    7510 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 03:33:51.873144    7510 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 03:33:51.873150    7510 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 03:33:51.880795    7510 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:33:51.881031    7510 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 03:33:51.881101    7510 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:33:51.881182    7510 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:33:51.881205    7510 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:33:51.881294    7510 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:33:51.881362    7510 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:33:51.881767    7510 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:33:51.890223    7510 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:33:51.890355    7510 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:33:51.890686    7510 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:33:51.890797    7510 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:33:51.890787    7510 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:33:51.891063    7510 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 03:33:51.891230    7510 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:33:51.891250    7510 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:33:52.785643    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:33:52.797546    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0610 03:33:52.831387    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:33:52.835265    7510 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0610 03:33:52.835300    7510 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:33:52.835312    7510 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0610 03:33:52.835338    7510 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:33:52.835367    7510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:33:52.835397    7510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0610 03:33:52.842768    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:33:52.858770    7510 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0610 03:33:52.858800    7510 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:33:52.858874    7510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:33:52.878223    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0610 03:33:52.878360    7510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0610 03:33:52.878677    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0610 03:33:52.878701    7510 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0610 03:33:52.878722    7510 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:33:52.878761    7510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:33:52.894820    7510 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0610 03:33:52.894852    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0610 03:33:52.895113    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0610 03:33:52.902447    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0610 03:33:52.928569    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0610 03:33:52.930872    7510 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 03:33:52.930987    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:33:52.933246    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0610 03:33:52.957709    7510 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0610 03:33:52.957730    7510 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:33:52.957783    7510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:33:52.970423    7510 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0610 03:33:52.970437    7510 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0610 03:33:52.970446    7510 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:33:52.970454    7510 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0610 03:33:52.970500    7510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0610 03:33:52.970500    7510 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	W0610 03:33:52.991756    7510 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 03:33:52.991841    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0610 03:33:52.991872    7510 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:33:53.012961    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 03:33:53.013081    7510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0610 03:33:53.015328    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 03:33:53.015417    7510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0610 03:33:53.046884    7510 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 03:33:53.046908    7510 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0610 03:33:53.046917    7510 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:33:53.046899    7510 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0610 03:33:53.046931    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0610 03:33:53.046959    7510 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:33:53.046994    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0610 03:33:53.074206    7510 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0610 03:33:53.074221    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0610 03:33:54.092046    7510 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.045073666s)
	I0610 03:33:54.092084    7510 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 03:33:54.092147    7510 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load": (1.017923875s)
	I0610 03:33:54.092166    7510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0610 03:33:54.092205    7510 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0610 03:33:54.092291    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0610 03:33:54.092447    7510 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0610 03:33:54.096317    7510 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0610 03:33:54.096352    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0610 03:33:54.178945    7510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0610 03:33:54.178964    7510 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0610 03:33:54.178970    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0610 03:33:54.312609    7510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0610 03:33:54.312624    7510 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0610 03:33:54.312631    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0610 03:33:54.549136    7510 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0610 03:33:54.549175    7510 cache_images.go:92] duration metric: took 2.676060708s to LoadCachedImages
	W0610 03:33:54.549223    7510 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0610 03:33:54.549230    7510 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0610 03:33:54.549293    7510 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-479000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 03:33:54.549353    7510 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 03:33:54.562620    7510 cni.go:84] Creating CNI manager for ""
	I0610 03:33:54.562631    7510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:33:54.562636    7510 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 03:33:54.562644    7510 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-479000 NodeName:running-upgrade-479000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 03:33:54.562711    7510 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-479000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
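The three "---"-separated documents above feed kubeadm's InitConfiguration, ClusterConfiguration, and the kubelet/kube-proxy component configs from a single file. For comparison against stock values, kubeadm can print its own defaults with the version-matched binary used above (illustrative invocation):

    /var/lib/minikube/binaries/v1.24.1/kubeadm config print init-defaults \
      --component-configs KubeletConfiguration,KubeProxyConfiguration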
	
	I0610 03:33:54.562764    7510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0610 03:33:54.565532    7510 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 03:33:54.565556    7510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 03:33:54.568288    7510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0610 03:33:54.573165    7510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 03:33:54.578194    7510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
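Once those three files are in place, the merged kubelet unit (base unit plus the 10-kubeadm.conf drop-in) can be inspected on the guest with systemd's combined view (illustrative):

    sudo systemctl cat kubelet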
	I0610 03:33:54.583581    7510 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0610 03:33:54.584821    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:33:54.668226    7510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 03:33:54.673629    7510 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000 for IP: 10.0.2.15
	I0610 03:33:54.673636    7510 certs.go:194] generating shared ca certs ...
	I0610 03:33:54.673644    7510 certs.go:226] acquiring lock for ca certs: {Name:mk21a2158098c453d4ecfbaacf1fd5e5adc33d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:33:54.673884    7510 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.key
	I0610 03:33:54.673936    7510 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.key
	I0610 03:33:54.673943    7510 certs.go:256] generating profile certs ...
	I0610 03:33:54.674001    7510 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/client.key
	I0610 03:33:54.674016    7510 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.key.15bddc67
	I0610 03:33:54.674024    7510 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.crt.15bddc67 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0610 03:33:55.114856    7510 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.crt.15bddc67 ...
	I0610 03:33:55.114868    7510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.crt.15bddc67: {Name:mke83e04130ca99fb11d0a7ef45b67f8ba8fec5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:33:55.115138    7510 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.key.15bddc67 ...
	I0610 03:33:55.115143    7510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.key.15bddc67: {Name:mka2d9de06c5f838ac9d2cc3c4078aa2748c2190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:33:55.115261    7510 certs.go:381] copying /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.crt.15bddc67 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.crt
	I0610 03:33:55.115455    7510 certs.go:385] copying /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.key.15bddc67 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.key
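The apiserver certificate assembled above embeds the four IP SANs from the "Generating cert" step (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.0.2.15). The SANs actually present in a certificate can be confirmed with openssl (illustrative):

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'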
	I0610 03:33:55.115621    7510 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/proxy-client.key
	I0610 03:33:55.115748    7510 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687.pem (1338 bytes)
	W0610 03:33:55.115777    7510 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687_empty.pem, impossibly tiny 0 bytes
	I0610 03:33:55.115783    7510 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 03:33:55.115803    7510 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem (1078 bytes)
	I0610 03:33:55.115821    7510 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem (1123 bytes)
	I0610 03:33:55.115837    7510 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem (1675 bytes)
	I0610 03:33:55.115874    7510 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem (1708 bytes)
	I0610 03:33:55.116205    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 03:33:55.123710    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 03:33:55.132077    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 03:33:55.138635    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 03:33:55.146245    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 03:33:55.153365    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 03:33:55.160802    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 03:33:55.167928    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 03:33:55.174924    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem --> /usr/share/ca-certificates/56872.pem (1708 bytes)
	I0610 03:33:55.181324    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 03:33:55.187961    7510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687.pem --> /usr/share/ca-certificates/5687.pem (1338 bytes)
	I0610 03:33:55.195368    7510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 03:33:55.200785    7510 ssh_runner.go:195] Run: openssl version
	I0610 03:33:55.202784    7510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56872.pem && ln -fs /usr/share/ca-certificates/56872.pem /etc/ssl/certs/56872.pem"
	I0610 03:33:55.206116    7510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56872.pem
	I0610 03:33:55.207583    7510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:20 /usr/share/ca-certificates/56872.pem
	I0610 03:33:55.207605    7510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56872.pem
	I0610 03:33:55.209404    7510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56872.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 03:33:55.212067    7510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 03:33:55.215165    7510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:33:55.216555    7510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:33:55.216574    7510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:33:55.218206    7510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 03:33:55.221192    7510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5687.pem && ln -fs /usr/share/ca-certificates/5687.pem /etc/ssl/certs/5687.pem"
	I0610 03:33:55.224152    7510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5687.pem
	I0610 03:33:55.225531    7510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:20 /usr/share/ca-certificates/5687.pem
	I0610 03:33:55.225547    7510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5687.pem
	I0610 03:33:55.227427    7510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5687.pem /etc/ssl/certs/51391683.0"
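Each test/ln pair above publishes a CA certificate under the name OpenSSL's directory lookup expects, the certificate's subject hash plus a .0 suffix, which is where link names like b5213941.0 come from. The same idiom stand-alone (illustrative certificate):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"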
	I0610 03:33:55.230296    7510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 03:33:55.231855    7510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 03:33:55.233460    7510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 03:33:55.235310    7510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 03:33:55.237066    7510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 03:33:55.239077    7510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 03:33:55.240716    7510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
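-checkend 86400 makes openssl exit non-zero when a certificate expires within the next 86400 seconds, so each of the runs above is a cheap 24-hour expiry guard on the existing control-plane certificates. Stand-alone form (illustrative path):

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
      echo "certificate valid for at least 24h"
    else
      echo "certificate expires within 24h"
    fi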
	I0610 03:33:55.242431    7510 kubeadm.go:391] StartCluster: {Name:running-upgrade-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51096 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-479000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:33:55.242501    7510 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 03:33:55.252811    7510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 03:33:55.256845    7510 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 03:33:55.256853    7510 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 03:33:55.256855    7510 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 03:33:55.256881    7510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 03:33:55.259999    7510 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 03:33:55.260035    7510 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-479000" does not appear in /Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:33:55.260049    7510 kubeconfig.go:62] /Users/jenkins/minikube-integration/19046-4812/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-479000" cluster setting kubeconfig missing "running-upgrade-479000" context setting]
	I0610 03:33:55.260205    7510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/kubeconfig: {Name:mke25032b58aa44d6357ccc49c0a5254f131209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:33:55.261094    7510 kapi.go:59] client config for running-upgrade-479000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104358460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 03:33:55.261891    7510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 03:33:55.264643    7510 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-479000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0610 03:33:55.264648    7510 kubeadm.go:1154] stopping kube-system containers ...
	I0610 03:33:55.264684    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 03:33:55.275907    7510 docker.go:483] Stopping containers: [9fa118b34db1 4278159d0002 c73533bae97c def37e7dd307 a3c577c83424 c3ac3dbc0c62 697a75470493 824abf5cac7a 69f1a3133c99 e0398b3ee412 a1e723aec762 3c5cb13206f1 c8bd364fe8f6]
	I0610 03:33:55.275972    7510 ssh_runner.go:195] Run: docker stop 9fa118b34db1 4278159d0002 c73533bae97c def37e7dd307 a3c577c83424 c3ac3dbc0c62 697a75470493 824abf5cac7a 69f1a3133c99 e0398b3ee412 a1e723aec762 3c5cb13206f1 c8bd364fe8f6
	I0610 03:33:55.286819    7510 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 03:33:55.373163    7510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 03:33:55.376900    7510 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Jun 10 10:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jun 10 10:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jun 10 10:33 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jun 10 10:33 /etc/kubernetes/scheduler.conf
	
	I0610 03:33:55.376936    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/admin.conf
	I0610 03:33:55.379987    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 03:33:55.380008    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 03:33:55.383288    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/kubelet.conf
	I0610 03:33:55.386580    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 03:33:55.386603    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 03:33:55.389386    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/controller-manager.conf
	I0610 03:33:55.391963    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 03:33:55.391985    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 03:33:55.394983    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/scheduler.conf
	I0610 03:33:55.397923    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 03:33:55.397945    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 03:33:55.400538    7510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 03:33:55.403469    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:33:55.424381    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:33:56.149912    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:33:56.384214    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:33:56.426769    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:33:56.448910    7510 api_server.go:52] waiting for apiserver process to appear ...
	I0610 03:33:56.448993    7510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:33:56.951112    7510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:33:57.451049    7510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:33:57.456545    7510 api_server.go:72] duration metric: took 1.007654625s to wait for apiserver process to appear ...
	I0610 03:33:57.456553    7510 api_server.go:88] waiting for apiserver healthz status ...
	I0610 03:33:57.456562    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:02.458654    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:02.458688    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:07.458894    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:07.458916    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:12.459175    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:12.459227    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:17.459825    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:17.459948    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:22.461055    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:22.461247    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:27.462575    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:27.462619    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:32.463872    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:32.463963    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:37.465168    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:37.465263    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:42.467798    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:42.467891    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:47.470545    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:47.470645    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:52.473200    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:34:52.473297    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:34:57.475913    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
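
	[Editor's note] From 03:33:57 onward every probe of https://10.0.2.15:8443/healthz times out after roughly five seconds with "context deadline exceeded": the apiserver container never starts answering, which is what ultimately fails this test. A minimal sketch of such a poll loop, assuming a 5-second per-request client timeout and an overall deadline; the TLS-verification skip is a shortcut for the sketch only (minikube validates against the cluster CA).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s gaps between probes in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			// On timeout the log prints "stopped: ... context deadline
			// exceeded" and retries after a short pause.
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}
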
	I0610 03:34:57.476322    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:34:57.515226    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:34:57.515364    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:34:57.538812    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:34:57.538903    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:34:57.554485    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:34:57.554567    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:34:57.566239    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:34:57.566310    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:34:57.577362    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:34:57.577426    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:34:57.587737    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:34:57.587807    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:34:57.597932    7510 logs.go:276] 0 containers: []
	W0610 03:34:57.597945    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:34:57.597993    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:34:57.608357    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:34:57.608372    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:34:57.608378    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:34:57.642980    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:34:57.642991    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:34:57.647150    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:34:57.647158    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:34:57.661559    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:34:57.661571    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:34:57.680478    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:34:57.680488    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:34:57.696575    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:34:57.696584    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:34:57.707670    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:34:57.707681    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:34:57.719574    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:34:57.719586    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:34:57.793994    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:34:57.794008    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:34:57.819903    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:34:57.819914    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:34:57.831198    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:34:57.831212    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:34:57.847206    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:34:57.847219    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:34:57.864948    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:34:57.864959    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:34:57.876537    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:34:57.876548    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:34:57.901057    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:34:57.901065    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:34:57.912307    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:34:57.912320    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:34:57.923754    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:34:57.923766    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
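
	[Editor's note] Each time the healthz wait gives up, minikube collects diagnostics: it resolves container IDs per control-plane component with docker ps -a --filter=name=k8s_<component>, then tails the last 400 lines of each hit, alongside the kubelet and docker journals, dmesg, and kubectl describe nodes. The cycle below repeats with different component orderings on every retry. A sketch of the per-component collection loop, with component names and the 400-line tail taken from the log and docker invoked locally instead of over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		}
		for _, c := range components {
			// List all containers, running or exited, whose name matches the
			// kubelet-assigned k8s_<component> prefix.
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("listing %s containers: %v\n", c, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, as in the log's "docker logs --tail 400".
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
			}
		}
	}
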
	I0610 03:35:00.437356    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:35:05.439725    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:35:05.440145    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:35:05.479254    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:35:05.479385    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:35:05.501249    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:35:05.501369    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:35:05.517191    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:35:05.517268    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:35:05.529834    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:35:05.529914    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:35:05.540604    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:35:05.540673    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:35:05.551366    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:35:05.551435    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:35:05.566935    7510 logs.go:276] 0 containers: []
	W0610 03:35:05.566946    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:35:05.567017    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:35:05.577744    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:35:05.577762    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:35:05.577767    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:35:05.612276    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:35:05.612288    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:35:05.637407    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:35:05.637420    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:35:05.654998    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:35:05.655008    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:35:05.666456    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:35:05.666467    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:35:05.684294    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:35:05.684303    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:35:05.720779    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:35:05.720788    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:35:05.725316    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:35:05.725324    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:35:05.739391    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:35:05.739404    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:35:05.750939    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:35:05.750950    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:35:05.770120    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:35:05.770132    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:35:05.781737    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:35:05.781750    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:35:05.797108    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:35:05.797121    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:35:05.808282    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:35:05.808293    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:35:05.833297    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:35:05.833305    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:35:05.844324    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:35:05.844333    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:35:05.858633    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:35:05.858645    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:35:08.373087    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:35:13.375810    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:35:13.376279    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:35:13.417642    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:35:13.417782    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:35:13.439552    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:35:13.439670    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:35:13.455227    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:35:13.455299    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:35:13.467884    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:35:13.467967    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:35:13.479443    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:35:13.479515    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:35:13.490336    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:35:13.490414    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:35:13.508844    7510 logs.go:276] 0 containers: []
	W0610 03:35:13.508855    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:35:13.508910    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:35:13.519278    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:35:13.519300    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:35:13.519305    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:35:13.534589    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:35:13.534599    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:35:13.546952    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:35:13.546965    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:35:13.562588    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:35:13.562598    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:35:13.580229    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:35:13.580242    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:35:13.591351    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:35:13.591362    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:35:13.616106    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:35:13.616115    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:35:13.650671    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:35:13.650677    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:35:13.664804    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:35:13.664827    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:35:13.676865    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:35:13.676878    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:35:13.690741    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:35:13.690750    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:35:13.702733    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:35:13.702743    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:35:13.713578    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:35:13.713590    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:35:13.729042    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:35:13.729051    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:35:13.740537    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:35:13.740549    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:35:13.745017    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:35:13.745026    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:35:13.778502    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:35:13.778512    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:35:16.304627    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:35:21.307370    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:35:21.307623    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:35:21.326169    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:35:21.326270    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:35:21.339821    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:35:21.339894    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:35:21.351607    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:35:21.351666    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:35:21.362702    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:35:21.362773    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:35:21.372847    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:35:21.372910    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:35:21.383376    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:35:21.383442    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:35:21.396504    7510 logs.go:276] 0 containers: []
	W0610 03:35:21.396515    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:35:21.396570    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:35:21.412204    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:35:21.412223    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:35:21.412228    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:35:21.423682    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:35:21.423696    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:35:21.436915    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:35:21.436926    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:35:21.451212    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:35:21.451222    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:35:21.462145    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:35:21.462153    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:35:21.479820    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:35:21.479838    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:35:21.500999    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:35:21.501008    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:35:21.514686    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:35:21.514696    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:35:21.525685    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:35:21.525693    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:35:21.537154    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:35:21.537163    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:35:21.573252    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:35:21.573260    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:35:21.577925    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:35:21.577935    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:35:21.611537    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:35:21.611548    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:35:21.636651    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:35:21.636662    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:35:21.650816    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:35:21.650825    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:35:21.662112    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:35:21.662121    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:35:21.686478    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:35:21.686485    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:35:24.205905    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:35:29.208651    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:35:29.209050    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:35:29.247940    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:35:29.248071    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:35:29.270206    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:35:29.270304    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:35:29.284993    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:35:29.285069    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:35:29.302247    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:35:29.302323    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:35:29.312980    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:35:29.313044    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:35:29.329098    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:35:29.329160    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:35:29.343225    7510 logs.go:276] 0 containers: []
	W0610 03:35:29.343236    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:35:29.343284    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:35:29.353394    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:35:29.353410    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:35:29.353415    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:35:29.366924    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:35:29.366935    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:35:29.378672    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:35:29.378683    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:35:29.402332    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:35:29.402339    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:35:29.406492    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:35:29.406498    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:35:29.419098    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:35:29.419107    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:35:29.452736    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:35:29.452749    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:35:29.469870    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:35:29.469883    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:35:29.481315    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:35:29.481327    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:35:29.492552    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:35:29.492565    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:35:29.504304    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:35:29.504313    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:35:29.541778    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:35:29.541786    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:35:29.566055    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:35:29.566066    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:35:29.580120    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:35:29.580131    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:35:29.599381    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:35:29.599390    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:35:29.616163    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:35:29.616175    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:35:29.626705    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:35:29.626716    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:35:32.140545    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:35:37.143262    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:35:37.143358    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:35:37.158616    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:35:37.158688    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:35:37.171156    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:35:37.171224    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:35:37.181256    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:35:37.181326    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:35:37.192425    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:35:37.192497    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:35:37.203138    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:35:37.203203    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:35:37.213775    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:35:37.213836    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:35:37.224024    7510 logs.go:276] 0 containers: []
	W0610 03:35:37.224036    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:35:37.224085    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:35:37.234944    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:35:37.234967    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:35:37.234973    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:35:37.246500    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:35:37.246512    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:35:37.251067    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:35:37.251073    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:35:37.265673    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:35:37.265684    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:35:37.280318    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:35:37.280330    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:35:37.291367    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:35:37.291379    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:35:37.304862    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:35:37.304873    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:35:37.316404    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:35:37.316413    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:35:37.340225    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:35:37.340232    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:35:37.376608    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:35:37.376616    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:35:37.412106    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:35:37.412118    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:35:37.437044    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:35:37.437053    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:35:37.453140    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:35:37.453152    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:35:37.464096    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:35:37.464106    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:35:37.483283    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:35:37.483296    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:35:37.501044    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:35:37.501054    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:35:37.512778    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:35:37.512789    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:35:40.025688    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:35:45.028221    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:35:45.028446    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:35:45.051089    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:35:45.051202    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:35:45.067030    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:35:45.067108    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:35:45.079965    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:35:45.080030    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:35:45.091366    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:35:45.091429    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:35:45.109971    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:35:45.110038    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:35:45.120281    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:35:45.120340    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:35:45.130441    7510 logs.go:276] 0 containers: []
	W0610 03:35:45.130453    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:35:45.130508    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:35:45.142154    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:35:45.142170    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:35:45.142176    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:35:45.153143    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:35:45.153152    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:35:45.188212    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:35:45.188223    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:35:45.211517    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:35:45.211528    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:35:45.222700    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:35:45.222711    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:35:45.233436    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:35:45.233446    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:35:45.248730    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:35:45.248742    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:35:45.260375    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:35:45.260388    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:35:45.277929    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:35:45.277939    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:35:45.315718    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:35:45.315733    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:35:45.329995    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:35:45.330005    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:35:45.344074    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:35:45.344088    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:35:45.355807    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:35:45.355817    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:35:45.367786    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:35:45.367797    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:35:45.391978    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:35:45.391988    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:35:45.396406    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:35:45.396414    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:35:45.418221    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:35:45.418233    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:35:47.931871    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:35:52.934626    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:35:52.935080    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:35:52.974990    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:35:52.975120    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:35:52.998898    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:35:52.999019    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:35:53.014115    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:35:53.014193    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:35:53.026322    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:35:53.026388    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:35:53.037491    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:35:53.037552    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:35:53.048758    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:35:53.048829    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:35:53.058912    7510 logs.go:276] 0 containers: []
	W0610 03:35:53.058923    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:35:53.058970    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:35:53.074083    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:35:53.074100    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:35:53.074106    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:35:53.088430    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:35:53.088439    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:35:53.099739    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:35:53.099750    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:35:53.134602    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:35:53.134609    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:35:53.138565    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:35:53.138574    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:35:53.152109    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:35:53.152119    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:35:53.182594    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:35:53.182605    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:35:53.193728    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:35:53.193739    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:35:53.204984    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:35:53.205038    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:35:53.223111    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:35:53.223124    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:35:53.234603    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:35:53.234613    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:35:53.247219    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:35:53.247235    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:35:53.258276    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:35:53.258289    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:35:53.269829    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:35:53.269840    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:35:53.285029    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:35:53.285039    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:35:53.316652    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:35:53.316662    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:35:53.352422    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:35:53.352432    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:35:55.876517    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:00.879184    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:36:00.879384    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:00.918200    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:00.918279    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:00.949083    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:00.949154    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:00.963892    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:00.963954    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:00.974021    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:00.974093    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:00.984070    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:00.984129    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:00.994642    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:00.994712    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:01.004867    7510 logs.go:276] 0 containers: []
	W0610 03:36:01.004884    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:01.004941    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:01.017430    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:01.017448    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:01.017454    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:01.042393    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:01.042405    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:01.056045    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:01.056056    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:01.071115    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:01.071127    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:01.086432    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:01.086443    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:01.098139    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:01.098153    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:01.136125    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:01.136137    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:01.150187    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:01.150196    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:01.162326    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:01.162337    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:01.173610    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:01.173621    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:36:01.185849    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:01.185859    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:01.190037    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:01.190045    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:01.205585    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:01.205595    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:01.223369    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:01.223381    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:36:01.248238    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:01.248246    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:01.285235    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:01.285245    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:01.296945    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:01.296956    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:03.813241    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:08.815458    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:36:08.815825    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:08.847568    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:08.847701    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:08.867469    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:08.867569    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:08.881640    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:08.881718    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:08.893634    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:08.893704    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:08.904335    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:08.904400    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:08.914965    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:08.915037    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:08.924791    7510 logs.go:276] 0 containers: []
	W0610 03:36:08.924801    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:08.924854    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:08.939781    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:08.939799    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:08.939804    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:08.950834    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:08.950844    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:08.962004    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:08.962018    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:08.979274    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:08.979287    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:08.990500    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:08.990514    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:09.002177    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:09.002190    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:09.006397    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:09.006403    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:09.040788    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:09.040797    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:09.054893    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:09.054906    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:09.079127    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:09.079136    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:09.094636    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:09.094647    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:09.110682    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:09.110693    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:09.121912    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:09.121923    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:09.135195    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:09.135205    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
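
journalctl accepts -u more than once, so the Docker engine and the cri-docker CRI adapter come back as a single time-ordered stream, capped at the most recent 400 entries by -n:

    # merged tail of the container engine and its CRI shim
    sudo journalctl -u docker -u cri-docker -n 400
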
	I0610 03:36:09.158513    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:09.158519    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
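
The container-status command packs a two-stage fallback into one line: the backticked which crictl || echo crictl substitutes crictl's full path when it is installed (or the bare name, so the sudo call fails fast), and the outer || then retries with the Docker CLI. An expanded sketch of the same intent (it drops the extra docker retry the original gets when crictl exists but errors):

    # prefer crictl for container status when present, otherwise use docker
    if command -v crictl >/dev/null 2>&1; then
      sudo "$(command -v crictl)" ps -a
    else
      sudo docker ps -a
    fi
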
	I0610 03:36:09.170021    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:09.170035    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:09.204982    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:09.204990    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:11.717908    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:16.719899    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
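
Between gathering passes the tool re-probes the apiserver, and the ~5 s gap between each "Checking apiserver healthz" line and its "stopped: ... Client.Timeout exceeded" partner is the per-request budget; this check-then-gather cycle repeats for the rest of the section. The probe can be reproduced from inside the guest with curl, assuming only what the log shows (the endpoint, a roughly 5 s budget, and a self-signed serving certificate, hence -k):

    # one healthz probe with the same ~5 s budget; -k skips cert verification
    curl -k -sS --max-time 5 https://10.0.2.15:8443/healthz || echo "stopped: probe timed out"
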
	I0610 03:36:16.720009    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:16.731753    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:16.731827    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:16.746166    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:16.746251    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:16.757541    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:16.757615    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:16.769081    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:16.769155    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:16.780951    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:16.781025    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:16.792307    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:16.792381    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:16.803235    7510 logs.go:276] 0 containers: []
	W0610 03:36:16.803247    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:16.803313    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:16.814060    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:16.814078    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:16.814084    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:16.830228    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:16.830246    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:16.849990    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:16.850001    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:16.866824    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:16.866836    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:36:16.890382    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:16.890390    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:36:16.902093    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:16.902103    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:16.931303    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:16.931317    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:16.943325    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:16.943335    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:16.977995    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:16.978005    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:16.982042    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:16.982049    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:17.015817    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:17.015827    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:17.033142    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:17.033151    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:17.047416    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:17.047425    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:17.058379    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:17.058391    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:17.069908    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:17.069918    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:17.083696    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:17.083707    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:17.094899    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:17.094908    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:19.608770    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:24.610266    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:36:24.610389    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:24.624471    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:24.624540    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:24.637188    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:24.637273    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:24.652535    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:24.652620    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:24.664220    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:24.664303    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:24.675416    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:24.675483    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:24.685893    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:24.685962    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:24.696696    7510 logs.go:276] 0 containers: []
	W0610 03:36:24.696708    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:24.696764    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:24.707262    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:24.707281    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:24.707287    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:24.743063    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:24.743074    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:24.769219    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:24.769232    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:24.780554    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:24.780567    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:24.803045    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:24.803058    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:24.840292    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:24.840304    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:24.861397    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:24.861409    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:24.873568    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:24.873581    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:24.877972    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:24.877978    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:24.891936    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:24.891947    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:24.908211    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:24.908222    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:24.920160    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:24.920172    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:24.935170    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:24.935181    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:24.950707    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:24.950718    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:24.967833    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:24.967847    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:24.979520    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:24.979530    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:36:25.006221    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:25.006236    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:36:27.520349    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:32.522501    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:36:32.522630    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:32.533854    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:32.533939    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:32.544821    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:32.544894    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:32.555339    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:32.555414    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:32.567786    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:32.567852    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:32.578361    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:32.578430    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:32.589125    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:32.589198    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:32.601126    7510 logs.go:276] 0 containers: []
	W0610 03:36:32.601142    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:32.601204    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:32.612419    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:32.612437    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:32.612442    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:32.626717    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:32.626727    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:32.641848    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:32.641859    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:32.657577    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:32.657588    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:32.669471    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:32.669482    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:32.673923    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:32.673930    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:32.708838    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:32.708850    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:32.735280    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:32.735297    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:32.756216    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:32.756228    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:32.768614    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:32.768625    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:32.780281    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:32.780292    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:32.817751    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:32.817765    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:32.832676    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:32.832688    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:32.852343    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:32.852355    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:32.864158    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:32.864173    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:36:32.877207    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:32.877223    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:32.889334    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:32.889345    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:36:35.414745    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:40.417126    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:36:40.417287    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:40.431198    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:40.431277    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:40.442582    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:40.442652    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:40.453378    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:40.453442    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:40.463397    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:40.463468    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:40.473545    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:40.473618    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:40.484282    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:40.484342    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:40.494734    7510 logs.go:276] 0 containers: []
	W0610 03:36:40.494746    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:40.494797    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:40.505404    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:40.505421    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:40.505427    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:40.545445    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:40.545459    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:40.569467    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:40.569477    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:40.583535    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:40.583549    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:40.595066    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:40.595076    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:40.606465    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:40.606476    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:40.622275    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:40.622289    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:36:40.635761    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:40.635774    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:40.640478    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:40.640485    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:40.660677    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:40.660690    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:40.672531    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:40.672543    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:36:40.696480    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:40.696489    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:40.735473    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:40.735490    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:40.750047    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:40.750061    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:40.771054    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:40.771066    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:40.789419    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:40.789430    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:40.801781    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:40.801794    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:43.315179    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:48.316270    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:36:48.316644    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:48.345662    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:48.345801    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:48.364381    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:48.364477    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:48.378047    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:48.378121    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:48.389958    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:48.390039    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:48.401071    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:48.401147    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:48.411884    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:48.411952    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:48.421957    7510 logs.go:276] 0 containers: []
	W0610 03:36:48.421970    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:48.422034    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:48.439024    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:48.439041    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:48.439047    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:48.476098    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:48.476109    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:48.490096    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:48.490108    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:48.501002    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:48.501012    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:48.512641    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:48.512655    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:48.528706    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:48.528719    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:48.540555    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:48.540567    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:36:48.563817    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:48.563824    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:48.568559    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:48.568565    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:48.593018    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:48.593031    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:48.606689    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:48.606700    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:48.638800    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:48.638812    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:36:48.651405    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:48.651414    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:48.663370    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:48.663380    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:48.700625    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:48.700632    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:48.715093    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:48.715103    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:48.726949    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:48.726960    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:51.240597    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:56.242823    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:36:56.242944    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:36:56.257794    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:36:56.257871    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:36:56.268355    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:36:56.268425    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:36:56.283214    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:36:56.283281    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:36:56.293629    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:36:56.293688    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:36:56.305979    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:36:56.306043    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:36:56.318547    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:36:56.318621    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:36:56.335434    7510 logs.go:276] 0 containers: []
	W0610 03:36:56.335449    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:36:56.335509    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:36:56.346230    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:36:56.346251    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:36:56.346256    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:36:56.384840    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:36:56.384857    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:36:56.397072    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:36:56.397086    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:36:56.408408    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:36:56.408423    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:36:56.428940    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:36:56.428950    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:36:56.433284    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:36:56.433290    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:36:56.445400    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:36:56.445409    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:36:56.457055    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:36:56.457064    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:36:56.473228    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:36:56.473240    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:36:56.484869    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:36:56.484885    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:36:56.519946    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:36:56.519954    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:36:56.534235    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:36:56.534244    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:36:56.548045    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:36:56.548055    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:36:56.570107    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:36:56.570118    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:36:56.586909    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:36:56.586928    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:36:56.598602    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:36:56.598615    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:36:56.623293    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:36:56.623307    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:36:59.149140    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:04.151367    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:04.151489    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:04.166118    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:04.166196    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:04.182518    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:04.182596    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:04.197108    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:04.197181    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:04.207756    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:04.207824    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:04.218153    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:04.218216    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:04.229740    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:04.229813    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:04.239897    7510 logs.go:276] 0 containers: []
	W0610 03:37:04.239908    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:04.239960    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:04.250753    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:04.250772    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:04.250779    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:04.287872    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:04.287880    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:04.292096    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:04.292102    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:04.307964    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:04.307974    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:04.318934    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:04.318944    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:04.331638    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:04.331648    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:04.348579    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:04.348589    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:04.365062    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:04.365074    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:04.382188    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:04.382197    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:04.394475    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:04.394484    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:04.412192    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:04.412202    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:04.423908    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:04.423917    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:04.459024    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:04.459035    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:04.485409    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:04.485424    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:04.500147    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:04.500163    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:04.512393    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:04.512403    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:04.524179    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:04.524195    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:07.050970    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:12.052966    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:12.053153    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:12.070198    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:12.070290    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:12.090170    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:12.090245    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:12.100978    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:12.101044    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:12.112584    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:12.112654    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:12.126734    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:12.126800    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:12.138069    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:12.138251    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:12.148398    7510 logs.go:276] 0 containers: []
	W0610 03:37:12.148409    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:12.148459    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:12.158974    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:12.158993    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:12.158998    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:12.171227    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:12.171237    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:12.175829    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:12.175836    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:12.193149    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:12.193161    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:12.205318    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:12.205331    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:12.242871    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:12.242882    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:12.255071    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:12.255082    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:12.267161    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:12.267172    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:12.292919    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:12.292928    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:12.307607    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:12.307618    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:12.346220    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:12.346228    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:12.372723    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:12.372733    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:12.387358    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:12.387370    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:12.401391    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:12.401402    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:12.417382    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:12.417392    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:12.435886    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:12.435896    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:12.450954    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:12.450966    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:14.968300    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:19.970886    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:19.971014    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:19.985002    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:19.985075    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:19.996794    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:19.996860    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:20.008021    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:20.008092    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:20.019361    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:20.019433    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:20.029648    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:20.029716    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:20.041359    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:20.041421    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:20.052800    7510 logs.go:276] 0 containers: []
	W0610 03:37:20.052813    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:20.052864    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:20.064554    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:20.064570    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:20.064574    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:20.080618    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:20.080626    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:20.096181    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:20.096190    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:20.107289    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:20.107299    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:20.119068    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:20.119082    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:20.136959    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:20.136971    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:20.203416    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:20.203425    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:20.229820    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:20.229834    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:20.242300    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:20.242314    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:20.260099    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:20.260109    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:20.276116    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:20.276126    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:20.288917    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:20.288928    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:20.301959    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:20.301971    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:20.326118    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:20.326136    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:20.363344    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:20.363358    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:20.367985    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:20.367992    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:20.382759    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:20.382771    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:22.899040    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:27.899835    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:27.903731    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:27.918698    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:27.918771    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:27.932287    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:27.932345    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:27.943683    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:27.943738    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:27.955004    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:27.955075    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:27.966519    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:27.966581    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:27.982731    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:27.982796    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:27.993523    7510 logs.go:276] 0 containers: []
	W0610 03:37:27.993534    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:27.993594    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:28.004730    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:28.004759    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:28.004765    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:28.009398    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:28.009409    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:28.023649    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:28.023665    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:28.040985    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:28.041002    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:28.053144    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:28.053156    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:28.089616    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:28.089631    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:28.101882    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:28.101893    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:28.113809    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:28.113824    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:28.126316    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:28.126330    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:28.162167    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:28.162174    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:28.187995    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:28.188007    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:28.199607    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:28.199617    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:28.211162    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:28.211172    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:28.234439    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:28.234450    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:28.246526    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:28.246537    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:28.261227    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:28.261236    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:28.276784    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:28.276801    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:30.796174    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:35.798639    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:35.798833    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:35.818790    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:35.818873    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:35.832460    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:35.832529    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:35.843788    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:35.843859    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:35.855703    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:35.855779    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:35.865960    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:35.866032    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:35.876745    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:35.876813    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:35.887046    7510 logs.go:276] 0 containers: []
	W0610 03:37:35.887057    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:35.887114    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:35.902679    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
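
The docker ps calls above work because the kubelet's CRI shim names every container k8s_<container>_<pod>_<namespace>_..., so filtering on the k8s_ prefix plus the component name is enough to find each control-plane container. A sketch of the same discovery step under that assumption (the exec-based wrapper is illustrative, not minikube's API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose Docker name
    // starts with the kubelet's "k8s_<component>" prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
        }
    }

Two IDs per component, as in the log above, simply means an exited container from the previous run sits next to the restarted one.
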
	I0610 03:37:35.902696    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:35.902701    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:35.916417    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:35.916428    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:35.941207    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:35.941218    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:35.955889    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:35.955900    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:35.967832    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:35.967844    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:35.983966    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:35.983977    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:36.002056    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:36.002067    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:36.017890    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:36.017900    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:36.053225    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:36.053236    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:36.064958    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:36.064968    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:36.079047    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:36.079059    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:36.090722    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:36.090732    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:36.113590    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:36.113597    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:36.150527    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:36.150535    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:36.162006    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:36.162019    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:36.167692    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:36.167701    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:36.178820    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:36.178834    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:38.697826    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:43.700001    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:43.700173    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:43.714820    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:43.714899    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:43.726196    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:43.726271    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:43.736841    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:43.736915    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:43.747618    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:43.747690    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:43.758236    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:43.758306    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:43.768879    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:43.768948    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:43.779150    7510 logs.go:276] 0 containers: []
	W0610 03:37:43.779161    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:43.779220    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:43.789504    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:43.789522    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:43.789528    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:43.801330    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:43.801342    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:43.813150    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:43.813161    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:43.824841    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:43.824857    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:43.836437    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:43.836473    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:43.848184    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:43.848196    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:43.865895    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:43.865909    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:43.881565    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:43.881579    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:43.902804    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:43.902814    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:43.916135    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:43.916147    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:43.927990    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:43.928011    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:43.951186    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:43.951200    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:43.975740    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:43.975752    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:43.990207    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:43.990219    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:44.001255    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:44.001268    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:44.036857    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:44.036866    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:44.041032    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:44.041042    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:46.577448    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:51.579562    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:51.579828    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:51.594892    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:51.594969    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:51.610810    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:51.610885    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:51.621987    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:51.622054    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:51.633033    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:51.633100    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:51.646128    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:51.646194    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:51.656552    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:51.656614    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:51.666749    7510 logs.go:276] 0 containers: []
	W0610 03:37:51.666761    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:51.666821    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:51.677773    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:51.677791    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:51.677797    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:51.696437    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:51.696450    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:51.707798    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:51.707808    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:51.744663    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:51.744679    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:51.782733    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:51.782746    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:51.807168    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:51.807179    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:51.818918    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:51.818930    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:51.842066    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:51.842081    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:51.853996    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:51.854007    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:51.867836    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:51.867848    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:51.880093    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:51.880104    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:51.896586    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:51.896596    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:51.908179    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:51.908188    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:51.913046    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:51.913052    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:51.931125    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:51.931136    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:51.945543    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:51.945557    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:51.957137    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:51.957146    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:54.469950    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:59.471044    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:59.471089    7510 kubeadm.go:591] duration metric: took 4m4.218128208s to restartPrimaryControlPlane
	W0610 03:37:59.471143    7510 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 03:37:59.471167    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0610 03:38:00.447631    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 03:38:00.452436    7510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 03:38:00.455086    7510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 03:38:00.457838    7510 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 03:38:00.457847    7510 kubeadm.go:156] found existing configuration files:
	
	I0610 03:38:00.457868    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/admin.conf
	I0610 03:38:00.460806    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 03:38:00.460827    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 03:38:00.463635    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/kubelet.conf
	I0610 03:38:00.465996    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 03:38:00.466019    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 03:38:00.468975    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/controller-manager.conf
	I0610 03:38:00.471750    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 03:38:00.471772    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 03:38:00.474199    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/scheduler.conf
	I0610 03:38:00.477106    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 03:38:00.477125    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
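
The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is removed otherwise. Here none of the four files exist (the earlier kubeadm reset wiped them), so every removal is a no-op. A sketch of the same logic, with the endpoint taken from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:51096"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Mirrors the `sudo grep ... || sudo rm -f ...` pairs above.
                os.Remove(conf)
                fmt.Println("removed (or already absent):", conf)
                continue
            }
            fmt.Println("kept:", conf)
        }
    }
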
	I0610 03:38:00.479881    7510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 03:38:00.497140    7510 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0610 03:38:00.497210    7510 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 03:38:00.549423    7510 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 03:38:00.549476    7510 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 03:38:00.549526    7510 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 03:38:00.599716    7510 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 03:38:00.603949    7510 out.go:204]   - Generating certificates and keys ...
	I0610 03:38:00.603984    7510 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 03:38:00.604015    7510 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 03:38:00.604125    7510 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 03:38:00.604250    7510 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 03:38:00.604381    7510 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 03:38:00.604414    7510 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 03:38:00.604455    7510 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 03:38:00.604537    7510 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 03:38:00.604604    7510 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 03:38:00.604653    7510 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 03:38:00.604716    7510 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 03:38:00.604752    7510 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 03:38:00.761174    7510 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 03:38:00.831120    7510 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 03:38:01.056168    7510 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 03:38:01.126354    7510 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 03:38:01.155950    7510 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 03:38:01.156252    7510 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 03:38:01.156296    7510 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 03:38:01.256765    7510 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 03:38:01.259914    7510 out.go:204]   - Booting up control plane ...
	I0610 03:38:01.259958    7510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 03:38:01.259993    7510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 03:38:01.260021    7510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 03:38:01.260086    7510 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 03:38:01.263561    7510 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 03:38:06.265506    7510 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.001727 seconds
	I0610 03:38:06.265565    7510 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 03:38:06.269150    7510 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 03:38:06.790809    7510 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 03:38:06.791289    7510 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-479000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 03:38:07.303059    7510 kubeadm.go:309] [bootstrap-token] Using token: ugvriq.mt0pdmq79dand9fb
	I0610 03:38:07.307564    7510 out.go:204]   - Configuring RBAC rules ...
	I0610 03:38:07.307633    7510 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 03:38:07.309688    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 03:38:07.317446    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 03:38:07.318281    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 03:38:07.319099    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 03:38:07.320033    7510 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 03:38:07.323115    7510 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 03:38:07.501102    7510 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 03:38:07.711389    7510 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 03:38:07.711758    7510 kubeadm.go:309] 
	I0610 03:38:07.711790    7510 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 03:38:07.711794    7510 kubeadm.go:309] 
	I0610 03:38:07.711847    7510 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 03:38:07.711853    7510 kubeadm.go:309] 
	I0610 03:38:07.711867    7510 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 03:38:07.711894    7510 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 03:38:07.711924    7510 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 03:38:07.711931    7510 kubeadm.go:309] 
	I0610 03:38:07.711966    7510 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 03:38:07.711970    7510 kubeadm.go:309] 
	I0610 03:38:07.711992    7510 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 03:38:07.711995    7510 kubeadm.go:309] 
	I0610 03:38:07.712018    7510 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 03:38:07.712064    7510 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 03:38:07.712107    7510 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 03:38:07.712110    7510 kubeadm.go:309] 
	I0610 03:38:07.712146    7510 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 03:38:07.712177    7510 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 03:38:07.712179    7510 kubeadm.go:309] 
	I0610 03:38:07.712214    7510 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ugvriq.mt0pdmq79dand9fb \
	I0610 03:38:07.712257    7510 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb \
	I0610 03:38:07.712267    7510 kubeadm.go:309] 	--control-plane 
	I0610 03:38:07.712271    7510 kubeadm.go:309] 
	I0610 03:38:07.712308    7510 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 03:38:07.712312    7510 kubeadm.go:309] 
	I0610 03:38:07.712348    7510 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ugvriq.mt0pdmq79dand9fb \
	I0610 03:38:07.712408    7510 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb 
	I0610 03:38:07.712464    7510 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
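
The --discovery-token-ca-cert-hash printed in the join commands above pins the cluster CA: kubeadm publishes the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, and joining nodes refuse any CA whose SPKI hash differs. A sketch that recomputes the hash from the certificateDir used earlier in the init output (the ca.crt filename under that directory is an assumption):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's hash is over the Subject Public Key Info, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

Run against the node's CA, the output should reproduce the sha256:f7fe6ae7... value in the log.
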
	I0610 03:38:07.712471    7510 cni.go:84] Creating CNI manager for ""
	I0610 03:38:07.712479    7510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:38:07.716717    7510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 03:38:07.723735    7510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 03:38:07.726593    7510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
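
The 496-byte conflist copied here is not reproduced in the log; the sketch below writes an illustrative bridge CNI config of the same general shape to the same path (every value in the JSON is an assumption for illustration, not minikube's shipped config):

    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Same destination as the scp step above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
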
	I0610 03:38:07.733053    7510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 03:38:07.733105    7510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 03:38:07.733105    7510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-479000 minikube.k8s.io/updated_at=2024_06_10T03_38_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=running-upgrade-479000 minikube.k8s.io/primary=true
	I0610 03:38:07.778341    7510 ops.go:34] apiserver oom_adj: -16
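
The ops.go line records the legacy OOM score of the apiserver process: on the -17..15 oom_adj scale, -16 steers the kernel OOM killer away from kube-apiserver almost unconditionally. A Linux-only sketch of the same check (the pgrep invocation is simplified from the one in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Newest process whose name is exactly kube-apiserver.
        pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process:", err)
            return
        }
        path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))
        val, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(val))) // expect -16
    }
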
	I0610 03:38:07.778444    7510 kubeadm.go:1107] duration metric: took 45.388333ms to wait for elevateKubeSystemPrivileges
	W0610 03:38:07.778463    7510 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 03:38:07.778467    7510 kubeadm.go:393] duration metric: took 4m12.540072625s to StartCluster
	I0610 03:38:07.778476    7510 settings.go:142] acquiring lock: {Name:mke35f292ed93eff7117a159773dd0e114b7dd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:38:07.778631    7510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:38:07.779011    7510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/kubeconfig: {Name:mke25032b58aa44d6357ccc49c0a5254f131209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:38:07.779213    7510 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:38:07.782817    7510 out.go:177] * Verifying Kubernetes components...
	I0610 03:38:07.779239    7510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 03:38:07.779329    7510 config.go:182] Loaded profile config "running-upgrade-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:38:07.789755    7510 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-479000"
	I0610 03:38:07.789767    7510 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-479000"
	I0610 03:38:07.789769    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0610 03:38:07.789770    7510 addons.go:243] addon storage-provisioner should already be in state true
	I0610 03:38:07.789800    7510 host.go:66] Checking if "running-upgrade-479000" exists ...
	I0610 03:38:07.789769    7510 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-479000"
	I0610 03:38:07.789824    7510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-479000"
	I0610 03:38:07.790832    7510 kapi.go:59] client config for running-upgrade-479000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104358460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 03:38:07.790982    7510 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-479000"
	W0610 03:38:07.790987    7510 addons.go:243] addon default-storageclass should already be in state true
	I0610 03:38:07.791000    7510 host.go:66] Checking if "running-upgrade-479000" exists ...
	I0610 03:38:07.795567    7510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:38:07.799702    7510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 03:38:07.799707    7510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 03:38:07.799714    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	I0610 03:38:07.800329    7510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 03:38:07.800335    7510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 03:38:07.800339    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	I0610 03:38:07.880363    7510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 03:38:07.885093    7510 api_server.go:52] waiting for apiserver process to appear ...
	I0610 03:38:07.885139    7510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:38:07.889515    7510 api_server.go:72] duration metric: took 110.291916ms to wait for apiserver process to appear ...
	I0610 03:38:07.889524    7510 api_server.go:88] waiting for apiserver healthz status ...
	I0610 03:38:07.889531    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:07.903441    7510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 03:38:07.907958    7510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 03:38:12.890121    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:12.890141    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:17.891444    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:17.891484    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:22.891933    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:22.891967    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:27.892273    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:27.892295    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:32.892917    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:32.892936    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:37.893488    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:37.893523    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0610 03:38:38.246907    7510 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0610 03:38:38.251721    7510 out.go:177] * Enabled addons: storage-provisioner
	I0610 03:38:38.258621    7510 addons.go:510] duration metric: took 30.479881584s for enable addons: enabled=[storage-provisioner]
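
The storageclass failure above dies at the TCP layer ("dial tcp ... i/o timeout"), i.e. before TLS or HTTP enter the picture, which is why the subsequent healthz probes keep timing out as well. A raw dial reproduces that distinction (address from the log; the timeout value is an assumption):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 5*time.Second)
        if err != nil {
            // e.g. "dial tcp 10.0.2.15:8443: i/o timeout", as in the error above.
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("tcp reachable; the problem would be higher in the stack")
    }
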
	I0610 03:38:42.894607    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:42.894647    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:47.895558    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:47.895580    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:52.896904    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:52.896936    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:57.898728    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:57.903080    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:02.904116    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:02.905081    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:07.906941    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:07.907050    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:07.924247    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:07.924318    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:07.935741    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:07.935813    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:07.946376    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:07.946439    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:07.957044    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:07.957111    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:07.967207    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:07.967274    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:07.979457    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:07.979525    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:07.990079    7510 logs.go:276] 0 containers: []
	W0610 03:39:07.990093    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:07.990155    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:08.000199    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:08.000215    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:08.000221    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:08.017062    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:08.017071    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:08.028319    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:08.028329    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:08.060756    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:08.060764    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:08.097283    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:08.097295    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:08.112492    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:08.112502    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:08.126938    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:08.126954    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:08.141198    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:08.141211    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:08.153471    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:08.153481    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:08.158068    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:08.158073    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:08.170159    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:08.170174    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:08.181644    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:08.181653    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:08.197066    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:08.197076    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:10.723591    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:15.726003    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:15.726177    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:15.737180    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:15.737252    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:15.747196    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:15.747262    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:15.757821    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:15.757886    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:15.768558    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:15.768623    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:15.778995    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:15.779070    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:15.789352    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:15.789423    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:15.799782    7510 logs.go:276] 0 containers: []
	W0610 03:39:15.799794    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:15.799855    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:15.810089    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:15.810104    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:15.810109    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:15.844500    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:15.844509    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:15.849045    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:15.849054    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:15.864540    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:15.864550    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:15.875723    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:15.875733    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:15.893320    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:15.893331    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:15.905336    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:15.905348    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:15.929927    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:15.929935    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:15.941204    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:15.941216    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:15.976459    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:15.976474    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:15.998719    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:15.998730    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:16.015677    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:16.015687    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:16.030308    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:16.030316    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:18.549918    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:23.552545    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:23.552722    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:23.576583    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:23.576659    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:23.590405    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:23.590483    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:23.601668    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:23.601732    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:23.612292    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:23.612363    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:23.622495    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:23.622565    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:23.634041    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:23.634106    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:23.644352    7510 logs.go:276] 0 containers: []
	W0610 03:39:23.644364    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:23.644422    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:23.654944    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:23.654959    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:23.654965    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:23.689291    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:23.689301    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:23.693947    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:23.693956    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:23.730005    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:23.730015    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:23.744611    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:23.744622    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:23.756905    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:23.756917    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:23.769058    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:23.769073    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:23.786658    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:23.786673    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:23.802210    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:23.802225    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:23.813838    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:23.813853    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:23.831372    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:23.831382    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:23.843339    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:23.843350    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:23.866144    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:23.866152    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:26.379477    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:31.381876    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:31.382325    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:31.421046    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:31.421207    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:31.443877    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:31.443994    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:31.460157    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:31.460238    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:31.473858    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:31.473917    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:31.484584    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:31.484662    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:31.496145    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:31.496208    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:31.512974    7510 logs.go:276] 0 containers: []
	W0610 03:39:31.512992    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:31.513042    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:31.523459    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:31.523474    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:31.523479    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:31.534894    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:31.534905    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:31.570452    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:31.570468    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:31.610013    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:31.610026    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:31.624237    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:31.624254    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:31.636373    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:31.636387    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:31.651186    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:31.651197    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:31.667345    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:31.667356    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:31.691580    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:31.691589    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:31.696158    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:31.696165    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:31.710491    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:31.710501    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:31.721739    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:31.721750    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:31.733629    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:31.733639    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:34.251818    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:39.354512    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:39.354655    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:39.368283    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:39.368362    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:39.379135    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:39.379204    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:39.390025    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:39.390094    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:39.400494    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:39.400565    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:39.410487    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:39.410566    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:39.421573    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:39.421641    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:39.432858    7510 logs.go:276] 0 containers: []
	W0610 03:39:39.432870    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:39.432921    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:39.443263    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:39.443280    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:39.443285    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:39.455142    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:39.455152    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:39.469782    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:39.469790    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:39.490530    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:39.490541    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:39.502123    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:39.502131    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:39.536508    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:39.536516    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:39.571464    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:39.571473    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:39.583324    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:39.583334    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:39.594852    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:39.594862    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:39.617734    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:39.617742    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:39.629044    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:39.629053    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:39.633792    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:39.633799    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:39.647779    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:39.647790    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:42.163552    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:47.165939    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:47.166224    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:47.191563    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:47.191663    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:47.208082    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:47.208155    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:47.221494    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:47.221567    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:47.233274    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:47.233346    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:47.243814    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:47.243885    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:47.254391    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:47.254455    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:47.264997    7510 logs.go:276] 0 containers: []
	W0610 03:39:47.265010    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:47.265056    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:47.275382    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:47.275400    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:47.275405    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:47.307875    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:47.307884    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:47.315185    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:47.315193    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:47.353532    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:47.353546    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:47.365627    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:47.365638    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:47.377770    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:47.377781    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:47.395511    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:47.395522    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:47.419875    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:47.419884    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:47.434007    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:47.434016    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:47.448197    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:47.448206    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:47.463307    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:47.463318    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:47.480902    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:47.480913    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:47.492418    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:47.492429    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:50.006131    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:55.008249    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:55.008481    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:55.030224    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:55.030322    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:55.045127    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:55.045204    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:55.057775    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:55.057856    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:55.068252    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:55.068320    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:55.078613    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:55.078682    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:55.089220    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:55.089289    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:55.099649    7510 logs.go:276] 0 containers: []
	W0610 03:39:55.099659    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:55.099720    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:55.110209    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:55.110224    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:55.110229    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:55.121824    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:55.121834    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:55.133695    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:55.133705    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:55.145108    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:55.145119    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:55.159542    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:55.159556    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:55.175249    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:55.175260    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:55.210015    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:55.210026    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:55.224822    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:55.224833    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:55.239394    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:55.239403    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:55.250964    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:55.250975    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:55.268748    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:55.268757    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:55.293779    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:55.293788    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:55.328327    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:55.328336    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:57.835128    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:02.837698    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:02.838064    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:02.868109    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:02.868243    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:02.887600    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:02.887685    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:02.902825    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:40:02.902907    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:02.919460    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:02.919532    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:02.930367    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:02.930439    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:02.940677    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:02.940743    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:02.954944    7510 logs.go:276] 0 containers: []
	W0610 03:40:02.954956    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:02.955013    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:02.966132    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:02.966151    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:02.966157    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:02.981280    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:02.981293    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:02.992983    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:02.992994    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:03.004713    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:03.004723    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:03.042363    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:03.042375    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:03.054264    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:03.054275    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:03.065968    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:03.065979    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:03.086245    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:03.086258    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:03.097980    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:03.097992    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:03.116017    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:03.116030    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:03.140673    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:03.140683    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:03.175036    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:03.175045    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:03.179516    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:03.179524    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:05.696176    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:10.698522    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:10.698780    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:10.726107    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:10.726228    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:10.743404    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:10.743492    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:10.757278    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:10.757352    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:10.768967    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:10.769024    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:10.779153    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:10.779224    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:10.789358    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:10.789429    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:10.803335    7510 logs.go:276] 0 containers: []
	W0610 03:40:10.803344    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:10.803395    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:10.813844    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:10.813863    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:10.813868    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:10.848286    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:10.848295    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:10.853052    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:10.853058    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:10.867471    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:10.867483    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:10.891446    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:10.891452    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:10.903452    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:10.903467    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:10.915290    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:10.915301    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:10.927576    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:10.927587    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:10.942573    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:10.942582    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:10.959368    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:10.959379    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:10.970394    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:10.970407    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:10.981911    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:10.981923    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:10.996883    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:10.996896    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:11.031356    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:11.031370    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:11.045117    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:11.045128    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:13.557992    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:18.560461    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:18.560812    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:18.590937    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:18.591069    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:18.609231    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:18.609334    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:18.622868    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:18.622945    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:18.634157    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:18.634235    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:18.644154    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:18.644220    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:18.654955    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:18.655023    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:18.665000    7510 logs.go:276] 0 containers: []
	W0610 03:40:18.665011    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:18.665071    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:18.675617    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:18.675637    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:18.675642    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:18.686994    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:18.687008    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:18.706761    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:18.706772    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:18.722680    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:18.722691    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:18.747839    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:18.747847    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:18.752708    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:18.752716    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:18.787552    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:18.787568    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:18.801433    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:18.801444    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:18.813273    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:18.813284    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:18.825629    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:18.825642    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:18.837198    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:18.837214    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:18.855007    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:18.855016    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:18.869893    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:18.869903    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:18.902853    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:18.902864    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:18.916923    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:18.916932    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:21.430146    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:26.432574    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:26.432853    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:26.461603    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:26.461736    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:26.480273    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:26.480364    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:26.494045    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:26.494116    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:26.505262    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:26.505331    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:26.515802    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:26.515867    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:26.526577    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:26.526649    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:26.543245    7510 logs.go:276] 0 containers: []
	W0610 03:40:26.543257    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:26.543314    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:26.553175    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:26.553191    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:26.553197    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:26.565516    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:26.565528    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:26.577260    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:26.577274    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:26.592125    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:26.592137    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:26.609847    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:26.609856    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:26.624277    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:26.624290    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:26.658574    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:26.658585    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:26.676203    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:26.676214    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:26.688478    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:26.688489    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:26.722555    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:26.722562    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:26.747126    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:26.747133    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:26.764685    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:26.764696    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:26.776219    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:26.776229    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:26.787617    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:26.787628    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:26.799216    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:26.799231    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:29.305735    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:34.308004    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:34.308165    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:34.324131    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:34.324202    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:34.336835    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:34.336905    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:34.348790    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:34.348863    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:34.359182    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:34.359244    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:34.371092    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:34.371155    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:34.381431    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:34.381497    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:34.391772    7510 logs.go:276] 0 containers: []
	W0610 03:40:34.391784    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:34.391840    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:34.404998    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:34.405017    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:34.405022    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:34.409482    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:34.409491    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:34.443945    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:34.443956    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:34.468728    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:34.468735    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:34.479776    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:34.479786    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:34.492037    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:34.492047    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:34.507069    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:34.507077    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:34.530135    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:34.530145    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:34.544185    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:34.544195    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:34.556188    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:34.556199    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:34.568026    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:34.568036    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:34.585684    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:34.585694    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:34.619701    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:34.619707    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:34.635382    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:34.635393    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:34.648024    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:34.648034    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:37.170132    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:42.172613    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:42.172950    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:42.206583    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:42.206717    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:42.226360    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:42.226454    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:42.241263    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:42.241353    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:42.253767    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:42.253837    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:42.264484    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:42.264550    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:42.275561    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:42.275636    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:42.287107    7510 logs.go:276] 0 containers: []
	W0610 03:40:42.287117    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:42.287172    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:42.297735    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:42.297756    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:42.297761    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:42.302274    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:42.302282    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:42.317118    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:42.317129    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:42.334270    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:42.334281    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:42.359547    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:42.359558    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:42.374460    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:42.374470    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:42.386327    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:42.386343    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:42.403858    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:42.403869    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:42.415783    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:42.415795    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:42.450066    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:42.450073    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:42.484190    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:42.484201    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:42.498986    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:42.498997    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:42.510788    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:42.510799    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:42.527296    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:42.527306    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:42.539052    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:42.539062    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:45.052870    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:50.055170    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:50.055412    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:50.072297    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:50.072371    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:50.086139    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:50.086212    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:50.097290    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:50.097356    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:50.114534    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:50.114606    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:50.124995    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:50.125065    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:50.135271    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:50.135340    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:50.146001    7510 logs.go:276] 0 containers: []
	W0610 03:40:50.146012    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:50.146071    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:50.156566    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:50.156582    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:50.156587    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:50.190760    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:50.190773    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:50.204938    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:50.204951    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:50.216363    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:50.216377    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:50.239839    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:50.239846    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:50.244137    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:50.244147    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:50.256358    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:50.256369    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:50.271180    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:50.271193    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:50.290332    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:50.290346    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:50.309771    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:50.309781    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:50.321165    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:50.321177    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:50.332634    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:50.332646    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:50.367738    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:50.367753    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:50.389364    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:50.389375    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:50.401264    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:50.401273    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:52.919186    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:57.921516    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:57.921656    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:57.932072    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:57.932143    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:57.946357    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:57.946425    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:57.956873    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:57.956937    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:57.967414    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:57.967476    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:57.978199    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:57.978267    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:57.988659    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:57.988720    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:58.004302    7510 logs.go:276] 0 containers: []
	W0610 03:40:58.004503    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:58.004559    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:58.015220    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:58.015237    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:58.015243    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:58.020335    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:58.020342    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:58.059584    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:58.059595    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:58.074227    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:58.074239    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:58.086316    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:58.086326    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:58.097809    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:58.097818    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:58.123018    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:58.123031    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:58.134484    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:58.134495    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:58.148451    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:58.148462    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:58.173040    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:58.173049    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:58.206681    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:58.206692    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:58.218262    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:58.218272    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:58.229477    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:58.229487    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:58.240863    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:58.240872    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:58.255229    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:58.255241    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:00.768547    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:05.770894    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:05.771112    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:05.792247    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:05.792341    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:05.811910    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:05.811982    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:05.825597    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:05.825668    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:05.841370    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:05.841439    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:05.853017    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:05.853085    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:05.863546    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:05.863618    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:05.873465    7510 logs.go:276] 0 containers: []
	W0610 03:41:05.873477    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:05.873533    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:05.883806    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:05.883826    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:05.883832    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:05.898203    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:05.898213    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:05.910009    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:05.910020    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:05.924751    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:05.924761    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:05.942122    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:05.942132    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:05.954218    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:05.954235    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:05.966086    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:05.966101    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:05.998820    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:05.998830    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:06.041163    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:06.041175    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:06.052939    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:06.052956    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:06.064558    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:06.064571    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:06.088763    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:06.088770    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:06.093010    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:06.093016    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:06.109258    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:06.109268    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:06.120807    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:06.120816    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:08.634408    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:13.635903    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:13.636504    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:13.679593    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:13.679723    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:13.697912    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:13.698010    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:13.711937    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:13.712012    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:13.724011    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:13.724079    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:13.739484    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:13.739551    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:13.754535    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:13.754601    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:13.764377    7510 logs.go:276] 0 containers: []
	W0610 03:41:13.764387    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:13.764436    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:13.776209    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:13.776228    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:13.776233    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:13.788606    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:13.788618    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:13.800416    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:13.800427    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:13.812197    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:13.812208    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:13.824557    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:13.824567    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:13.839479    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:13.839491    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:13.844282    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:13.844289    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:13.878540    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:13.878551    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:13.898787    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:13.898797    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:13.910393    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:13.910404    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:13.929225    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:13.929237    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:13.940840    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:13.940851    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:13.964793    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:13.964802    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:13.976621    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:13.976636    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:14.009926    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:14.009935    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:16.529218    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:21.531615    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:21.531911    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:21.570019    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:21.570120    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:21.585465    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:21.585546    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:21.597812    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:21.597879    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:21.608622    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:21.608702    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:21.623129    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:21.623201    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:21.634501    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:21.634563    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:21.648326    7510 logs.go:276] 0 containers: []
	W0610 03:41:21.648339    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:21.648395    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:21.658787    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:21.658805    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:21.658812    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:21.694205    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:21.694218    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:21.730999    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:21.731013    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:21.745550    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:21.745562    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:21.756524    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:21.756538    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:21.768155    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:21.768169    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:21.785483    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:21.785494    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:21.789975    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:21.789984    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:21.801442    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:21.801456    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:21.814875    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:21.814887    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:21.827025    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:21.827037    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:21.838600    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:21.838611    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:21.853487    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:21.853497    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:21.865341    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:21.865352    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:21.877109    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:21.877121    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:24.404575    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:29.406917    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:29.407114    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:29.425505    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:29.425598    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:29.438806    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:29.438884    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:29.450201    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:29.450270    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:29.460933    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:29.460997    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:29.471628    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:29.471692    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:29.481858    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:29.481919    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:29.492797    7510 logs.go:276] 0 containers: []
	W0610 03:41:29.492815    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:29.492871    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:29.505267    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:29.505286    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:29.505292    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:29.520252    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:29.520267    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:29.546950    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:29.546961    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:29.558736    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:29.558750    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:29.579716    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:29.579731    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:29.591600    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:29.591614    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:29.603710    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:29.603721    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:29.615230    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:29.615244    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:29.626198    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:29.626213    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:29.650182    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:29.650190    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:29.682848    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:29.682859    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:29.694549    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:29.694564    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:29.705830    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:29.705839    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:29.710905    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:29.710912    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:29.745180    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:29.745194    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:32.260399    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:37.262803    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:37.262988    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:37.276153    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:37.276233    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:37.289973    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:37.290044    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:37.301014    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:37.301100    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:37.311856    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:37.311923    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:37.322006    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:37.322068    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:37.332492    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:37.332554    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:37.342278    7510 logs.go:276] 0 containers: []
	W0610 03:41:37.342291    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:37.342351    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:37.353296    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:37.353315    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:37.353321    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:37.365677    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:37.365690    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:37.377548    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:37.377559    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:37.389320    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:37.389335    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:37.408291    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:37.408302    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:37.412554    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:37.412560    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:37.446913    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:37.446924    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:37.461453    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:37.461462    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:37.476511    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:37.476520    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:37.488094    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:37.488108    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:37.499294    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:37.499304    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:37.512887    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:37.512899    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:37.524843    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:37.524853    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:37.548291    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:37.548299    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:37.580349    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:37.580358    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:40.100132    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:45.102487    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:45.102753    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:45.144427    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:45.144549    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:45.165567    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:45.165640    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:45.176744    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:45.176816    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:45.187792    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:45.187857    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:45.197988    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:45.198052    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:45.213055    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:45.213123    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:45.223642    7510 logs.go:276] 0 containers: []
	W0610 03:41:45.223654    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:45.223712    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:45.234001    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:45.234017    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:45.234023    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:45.246711    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:45.246723    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:45.281389    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:45.281402    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:45.296873    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:45.296887    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:45.311814    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:45.311828    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:45.328779    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:45.328788    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:45.352422    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:45.352453    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:45.387224    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:45.387235    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:45.402518    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:45.402532    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:45.414020    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:45.414030    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:45.425558    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:45.425571    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:45.439906    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:45.439916    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:45.451446    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:45.451460    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:45.463017    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:45.463031    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:45.474753    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:45.474763    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:47.980753    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:52.983006    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:52.983109    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:52.995488    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:52.995560    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:53.006052    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:53.006118    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:53.018124    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:53.018195    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:53.029171    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:53.029244    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:53.040115    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:53.040182    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:53.051054    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:53.051130    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:53.061631    7510 logs.go:276] 0 containers: []
	W0610 03:41:53.061643    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:53.061701    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:53.073585    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:53.073605    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:53.073611    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:53.109177    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:53.109199    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:53.125407    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:53.125419    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:53.139901    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:53.139914    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:53.159195    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:53.159211    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:53.172960    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:53.172973    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:53.185893    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:53.185910    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:53.191053    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:53.191067    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:53.230977    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:53.230993    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:53.244509    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:53.244522    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:53.259016    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:53.259029    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:53.284654    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:53.284670    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:53.299436    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:53.299448    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:53.314133    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:53.314146    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:53.342461    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:53.342473    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:55.856655    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:00.858973    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:00.859162    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:42:00.877328    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:42:00.877404    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:42:00.890549    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:42:00.890615    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:42:00.901654    7510 logs.go:276] 4 containers: [b082e39dd8f4 9a2bad93756d 813ec2d6967d dc4fe8f226c8]
	I0610 03:42:00.901728    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:42:00.911809    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:42:00.911882    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:42:00.922137    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:42:00.922211    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:42:00.933755    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:42:00.933823    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:42:00.943678    7510 logs.go:276] 0 containers: []
	W0610 03:42:00.943689    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:42:00.943744    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:42:00.954506    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:42:00.954523    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:42:00.954528    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:42:00.991120    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:42:00.991130    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:42:01.002949    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:42:01.002960    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:42:01.014202    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:42:01.014213    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:42:01.025762    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:42:01.025772    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:42:01.030551    7510 logs.go:123] Gathering logs for coredns [9a2bad93756d] ...
	I0610 03:42:01.030560    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2bad93756d"
	I0610 03:42:01.041979    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:42:01.041992    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:42:01.053808    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:42:01.053819    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:42:01.065479    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:42:01.065490    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:42:01.090175    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:42:01.090183    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:42:01.124115    7510 logs.go:123] Gathering logs for coredns [b082e39dd8f4] ...
	I0610 03:42:01.124122    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b082e39dd8f4"
	I0610 03:42:01.134942    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:42:01.134956    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:42:01.155105    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:42:01.155116    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:42:01.168749    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:42:01.168759    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:42:01.183054    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:42:01.183068    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:42:03.699093    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:08.701366    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:08.705736    7510 out.go:177] 
	W0610 03:42:08.709918    7510 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0610 03:42:08.709935    7510 out.go:239] * 
	W0610 03:42:08.710682    7510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:42:08.721822    7510 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-479000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
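The loop in the log above polls https://10.0.2.15:8443/healthz, and every probe dies with a 5s client timeout until the overall 6m0s node wait expires, which the harness then reports as exit status 80 (GUEST_START). A minimal manual probe of the same endpoint, assuming the profile's guest is still reachable and reusing the apiserver container ID taken from the log (illustrative commands, not part of the harness):

	# Probe the healthz endpoint the start logic polls; -k skips cert checks, -m 5 mirrors the per-probe timeout.
	out/minikube-darwin-arm64 -p running-upgrade-479000 ssh -- curl -ksm 5 https://10.0.2.15:8443/healthz
	# If the probe hangs, inspect the apiserver container directly (ID from the log above).
	out/minikube-darwin-arm64 -p running-upgrade-479000 ssh -- sudo docker logs --tail 50 824f9b6a1778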
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-06-10 03:42:08.815969 -0700 PDT m=+1349.358043626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-479000 -n running-upgrade-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-479000 -n running-upgrade-479000: exit status 2 (15.745952792s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
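A non-zero exit from `minikube status` encodes which component is unhealthy, so `Running` on stdout combined with exit status 2 means the host VM is up while the cluster behind it is not; the harness tolerates this ("may be ok"). One way to see all component states at once (illustrative invocation, assuming the profile still exists):

	out/minikube-darwin-arm64 status -p running-upgrade-479000 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'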
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-479000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-140000          | force-systemd-flag-140000 | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-550000              | force-systemd-env-550000  | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-550000           | force-systemd-env-550000  | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT | 10 Jun 24 03:32 PDT |
	| start   | -p docker-flags-401000                | docker-flags-401000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-140000             | force-systemd-flag-140000 | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-140000          | force-systemd-flag-140000 | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT | 10 Jun 24 03:32 PDT |
	| start   | -p cert-expiration-032000             | cert-expiration-032000    | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-401000 ssh               | docker-flags-401000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-401000 ssh               | docker-flags-401000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-401000                | docker-flags-401000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT | 10 Jun 24 03:32 PDT |
	| start   | -p cert-options-960000                | cert-options-960000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-960000 ssh               | cert-options-960000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-960000 -- sudo        | cert-options-960000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-960000                | cert-options-960000       | jenkins | v1.33.1 | 10 Jun 24 03:32 PDT | 10 Jun 24 03:32 PDT |
	| start   | -p running-upgrade-479000             | minikube                  | jenkins | v1.26.0 | 10 Jun 24 03:32 PDT | 10 Jun 24 03:33 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-479000             | running-upgrade-479000    | jenkins | v1.33.1 | 10 Jun 24 03:33 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-032000             | cert-expiration-032000    | jenkins | v1.33.1 | 10 Jun 24 03:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-032000             | cert-expiration-032000    | jenkins | v1.33.1 | 10 Jun 24 03:35 PDT | 10 Jun 24 03:35 PDT |
	| start   | -p kubernetes-upgrade-122000          | kubernetes-upgrade-122000 | jenkins | v1.33.1 | 10 Jun 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-122000          | kubernetes-upgrade-122000 | jenkins | v1.33.1 | 10 Jun 24 03:35 PDT | 10 Jun 24 03:35 PDT |
	| start   | -p kubernetes-upgrade-122000          | kubernetes-upgrade-122000 | jenkins | v1.33.1 | 10 Jun 24 03:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-122000          | kubernetes-upgrade-122000 | jenkins | v1.33.1 | 10 Jun 24 03:36 PDT | 10 Jun 24 03:36 PDT |
	| start   | -p stopped-upgrade-390000             | minikube                  | jenkins | v1.26.0 | 10 Jun 24 03:36 PDT | 10 Jun 24 03:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-390000 stop           | minikube                  | jenkins | v1.26.0 | 10 Jun 24 03:36 PDT | 10 Jun 24 03:36 PDT |
	| start   | -p stopped-upgrade-390000             | stopped-upgrade-390000    | jenkins | v1.33.1 | 10 Jun 24 03:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 03:36:58
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 03:36:58.351923    7676 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:36:58.352133    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:36:58.352137    7676 out.go:304] Setting ErrFile to fd 2...
	I0610 03:36:58.352140    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:36:58.352335    7676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:36:58.353551    7676 out.go:298] Setting JSON to false
	I0610 03:36:58.373084    7676 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5789,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:36:58.373183    7676 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:36:58.378199    7676 out.go:177] * [stopped-upgrade-390000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:36:58.386112    7676 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:36:58.386153    7676 notify.go:220] Checking for updates...
	I0610 03:36:58.393032    7676 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:36:58.396165    7676 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:36:58.399098    7676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:36:58.402029    7676 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:36:58.405094    7676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:36:58.408233    7676 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:36:58.411072    7676 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 03:36:58.414055    7676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:36:58.417069    7676 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:36:58.424079    7676 start.go:297] selected driver: qemu2
	I0610 03:36:58.424087    7676 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:36:58.424157    7676 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:36:58.426650    7676 cni.go:84] Creating CNI manager for ""
	I0610 03:36:58.426673    7676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:36:58.426704    7676 start.go:340] cluster config:
	{Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:36:58.426758    7676 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:36:58.434077    7676 out.go:177] * Starting "stopped-upgrade-390000" primary control-plane node in "stopped-upgrade-390000" cluster
	I0610 03:36:58.438145    7676 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 03:36:58.438161    7676 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0610 03:36:58.438172    7676 cache.go:56] Caching tarball of preloaded images
	I0610 03:36:58.438238    7676 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:36:58.438245    7676 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0610 03:36:58.438318    7676 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/config.json ...
	I0610 03:36:58.438828    7676 start.go:360] acquireMachinesLock for stopped-upgrade-390000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:36:58.438866    7676 start.go:364] duration metric: took 31.875µs to acquireMachinesLock for "stopped-upgrade-390000"
	I0610 03:36:58.438875    7676 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:36:58.438882    7676 fix.go:54] fixHost starting: 
	I0610 03:36:58.439011    7676 fix.go:112] recreateIfNeeded on stopped-upgrade-390000: state=Stopped err=<nil>
	W0610 03:36:58.439020    7676 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:36:58.446066    7676 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-390000" ...
	I0610 03:36:59.149140    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:36:58.450130    7676 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51286-:22,hostfwd=tcp::51287-:2376,hostname=stopped-upgrade-390000 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/disk.qcow2
	I0610 03:36:58.499973    7676 main.go:141] libmachine: STDOUT: 
	I0610 03:36:58.499997    7676 main.go:141] libmachine: STDERR: 
	I0610 03:36:58.500003    7676 main.go:141] libmachine: Waiting for VM to start (ssh -p 51286 docker@127.0.0.1)...
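	The qemu invocation above forwards guest port 22 to host port 51286 (hostfwd=tcp::51286-:22), so "Waiting for VM to start" amounts to polling that forwarded SSH port until it accepts connections. An illustrative Go sketch of such a probe (assumed logic, not minikube's actual implementation; the port number is taken from this log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort dials addr repeatedly until it accepts or the timeout elapses.
	// The polling intervals here are assumptions for illustration.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		fmt.Println(waitForPort("127.0.0.1:51286", 30*time.Second))
	}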
	I0610 03:37:04.151367    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:04.151489    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:04.166118    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:04.166196    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:04.182518    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:04.182596    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:04.197108    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:04.197181    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:04.207756    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:04.207824    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:04.218153    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:04.218216    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:04.229740    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:04.229813    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:04.239897    7510 logs.go:276] 0 containers: []
	W0610 03:37:04.239908    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:04.239960    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:04.250753    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:04.250772    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:04.250779    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:04.287872    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:04.287880    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:04.292096    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:04.292102    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:04.307964    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:04.307974    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:04.318934    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:04.318944    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:04.331638    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:04.331648    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:04.348579    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:04.348589    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:04.365062    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:04.365074    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:04.382188    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:04.382197    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:04.394475    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:04.394484    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:04.412192    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:04.412202    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:04.423908    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:04.423917    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:04.459024    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:04.459035    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:04.485409    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:04.485424    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:04.500147    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:04.500163    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:04.512393    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:04.512403    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:04.524179    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:04.524195    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
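	The block above repeats a fixed diagnostic loop: for each control-plane component, list matching container IDs with docker ps -a --filter name=..., then tail the last 400 lines of each container's logs. A rough Go reconstruction of that pattern (names and structure are assumptions, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of containers whose name matches the filter.
	func containerIDs(name string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
			for _, id := range containerIDs(c) {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
			}
		}
	}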
	I0610 03:37:07.050970    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:12.052966    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:12.053153    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:12.070198    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:12.070290    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:12.090170    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:12.090245    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:12.100978    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:12.101044    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:12.112584    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:12.112654    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:12.126734    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:12.126800    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:12.138069    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:12.138251    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:12.148398    7510 logs.go:276] 0 containers: []
	W0610 03:37:12.148409    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:12.148459    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:12.158974    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:12.158993    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:12.158998    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:12.171227    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:12.171237    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:12.175829    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:12.175836    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:12.193149    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:12.193161    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:12.205318    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:12.205331    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:12.242871    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:12.242882    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:12.255071    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:12.255082    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:12.267161    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:12.267172    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:12.292919    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:12.292928    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:12.307607    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:12.307618    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:12.346220    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:12.346228    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:12.372723    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:12.372733    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:12.387358    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:12.387370    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:12.401391    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:12.401402    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:12.417382    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:12.417392    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:12.435886    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:12.435896    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:12.450954    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:12.450966    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:14.968300    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:18.814368    7676 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/config.json ...
	I0610 03:37:18.815225    7676 machine.go:94] provisionDockerMachine start ...
	I0610 03:37:18.815429    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:18.816005    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:18.816021    7676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 03:37:18.913821    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 03:37:18.913867    7676 buildroot.go:166] provisioning hostname "stopped-upgrade-390000"
	I0610 03:37:18.914033    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:18.914359    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:18.914374    7676 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-390000 && echo "stopped-upgrade-390000" | sudo tee /etc/hostname
	I0610 03:37:19.002983    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-390000
	
	I0610 03:37:19.003110    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.003297    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.003309    7676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-390000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-390000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-390000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 03:37:19.074821    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 03:37:19.074834    7676 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-4812/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-4812/.minikube}
	I0610 03:37:19.074844    7676 buildroot.go:174] setting up certificates
	I0610 03:37:19.074850    7676 provision.go:84] configureAuth start
	I0610 03:37:19.074855    7676 provision.go:143] copyHostCerts
	I0610 03:37:19.074938    7676 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem, removing ...
	I0610 03:37:19.074951    7676 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem
	I0610 03:37:19.075325    7676 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem (1123 bytes)
	I0610 03:37:19.075533    7676 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem, removing ...
	I0610 03:37:19.075538    7676 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem
	I0610 03:37:19.075598    7676 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem (1675 bytes)
	I0610 03:37:19.075718    7676 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem, removing ...
	I0610 03:37:19.075721    7676 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem
	I0610 03:37:19.075775    7676 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem (1078 bytes)
	I0610 03:37:19.075872    7676 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-390000 san=[127.0.0.1 localhost minikube stopped-upgrade-390000]
	I0610 03:37:19.127423    7676 provision.go:177] copyRemoteCerts
	I0610 03:37:19.127460    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 03:37:19.127470    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:37:19.166640    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 03:37:19.174428    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 03:37:19.181414    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 03:37:19.188135    7676 provision.go:87] duration metric: took 113.274625ms to configureAuth
	I0610 03:37:19.188143    7676 buildroot.go:189] setting minikube options for container-runtime
	I0610 03:37:19.188263    7676 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:37:19.188303    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.188393    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.188398    7676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 03:37:19.256858    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 03:37:19.256867    7676 buildroot.go:70] root file system type: tmpfs
	I0610 03:37:19.256914    7676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 03:37:19.256961    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.257077    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.257111    7676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 03:37:19.328230    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 03:37:19.328301    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.328419    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.328428    7676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 03:37:19.688756    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 03:37:19.688773    7676 machine.go:97] duration metric: took 873.547292ms to provisionDockerMachine
	I0610 03:37:19.688784    7676 start.go:293] postStartSetup for "stopped-upgrade-390000" (driver="qemu2")
	I0610 03:37:19.688790    7676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 03:37:19.688838    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 03:37:19.688850    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:37:19.726737    7676 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 03:37:19.728119    7676 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 03:37:19.728126    7676 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-4812/.minikube/addons for local assets ...
	I0610 03:37:19.728207    7676 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-4812/.minikube/files for local assets ...
	I0610 03:37:19.728327    7676 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem -> 56872.pem in /etc/ssl/certs
	I0610 03:37:19.728449    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 03:37:19.730862    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem --> /etc/ssl/certs/56872.pem (1708 bytes)
	I0610 03:37:19.738160    7676 start.go:296] duration metric: took 49.37225ms for postStartSetup
	I0610 03:37:19.738176    7676 fix.go:56] duration metric: took 21.299635583s for fixHost
	I0610 03:37:19.738212    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.738314    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.738322    7676 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 03:37:19.806538    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015840.239770546
	
	I0610 03:37:19.806546    7676 fix.go:216] guest clock: 1718015840.239770546
	I0610 03:37:19.806550    7676 fix.go:229] Guest: 2024-06-10 03:37:20.239770546 -0700 PDT Remote: 2024-06-10 03:37:19.738177 -0700 PDT m=+21.421229334 (delta=501.593546ms)
	I0610 03:37:19.806561    7676 fix.go:200] guest clock delta is within tolerance: 501.593546ms
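	The delta above is simply guest clock minus host clock: 1718015840.239770546 (guest) against 03:37:19.738177 PDT (host) gives 501.593546ms. A small Go check reproducing the arithmetic; the tolerance constant is an assumption for illustration, not minikube's configured value:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Both instants are copied from the log lines above.
		guest := time.Unix(1718015840, 239770546)
		host := time.Date(2024, 6, 10, 3, 37, 19, 738177000, time.FixedZone("PDT", -7*3600))
		delta := guest.Sub(host)
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
	}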
	I0610 03:37:19.806564    7676 start.go:83] releasing machines lock for "stopped-upgrade-390000", held for 21.368034541s
	I0610 03:37:19.806632    7676 ssh_runner.go:195] Run: cat /version.json
	I0610 03:37:19.806635    7676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 03:37:19.806640    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:37:19.806651    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	W0610 03:37:19.807172    7676 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51286: connect: connection refused
	I0610 03:37:19.807197    7676 retry.go:31] will retry after 187.508835ms: dial tcp [::1]:51286: connect: connection refused
	W0610 03:37:20.035054    7676 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0610 03:37:20.035119    7676 ssh_runner.go:195] Run: systemctl --version
	I0610 03:37:20.037003    7676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 03:37:20.038895    7676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 03:37:20.038934    7676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0610 03:37:20.042431    7676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0610 03:37:20.047793    7676 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 03:37:20.047807    7676 start.go:494] detecting cgroup driver to use...
	I0610 03:37:20.047884    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 03:37:20.055281    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0610 03:37:20.058782    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 03:37:20.062307    7676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 03:37:20.062352    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 03:37:20.065917    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 03:37:20.069443    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 03:37:20.072698    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 03:37:20.075648    7676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 03:37:20.078616    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 03:37:20.082093    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 03:37:20.085520    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 03:37:20.088843    7676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 03:37:20.091573    7676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 03:37:20.095073    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:20.177117    7676 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 03:37:20.183820    7676 start.go:494] detecting cgroup driver to use...
	I0610 03:37:20.183892    7676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 03:37:20.191301    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 03:37:20.196395    7676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 03:37:20.203128    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 03:37:20.208608    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 03:37:20.213560    7676 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 03:37:20.273299    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 03:37:20.278612    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 03:37:20.284481    7676 ssh_runner.go:195] Run: which cri-dockerd
	I0610 03:37:20.285888    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 03:37:20.289436    7676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 03:37:20.295563    7676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 03:37:20.374920    7676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 03:37:20.452533    7676 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 03:37:20.452589    7676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 03:37:20.457923    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:20.525077    7676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 03:37:21.642941    7676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.11786525s)
	I0610 03:37:21.643000    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 03:37:21.647488    7676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 03:37:21.654275    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 03:37:21.658895    7676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 03:37:21.736913    7676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 03:37:21.805000    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:21.864116    7676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 03:37:21.870243    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 03:37:21.875170    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:21.957439    7676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 03:37:21.997911    7676 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 03:37:21.997994    7676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 03:37:22.000063    7676 start.go:562] Will wait 60s for crictl version
	I0610 03:37:22.000116    7676 ssh_runner.go:195] Run: which crictl
	I0610 03:37:22.001655    7676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 03:37:22.015945    7676 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0610 03:37:22.016008    7676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 03:37:22.032467    7676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 03:37:19.970886    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:19.971014    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:19.985002    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:19.985075    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:19.996794    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:19.996860    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:20.008021    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:20.008092    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:20.019361    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:20.019433    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:20.029648    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:20.029716    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:20.041359    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:20.041421    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:20.052800    7510 logs.go:276] 0 containers: []
	W0610 03:37:20.052813    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:20.052864    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:20.064554    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:20.064570    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:20.064574    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:20.080618    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:20.080626    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:20.096181    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:20.096190    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:20.107289    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:20.107299    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:20.119068    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:20.119082    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:20.136959    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:20.136971    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:20.203416    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:20.203425    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:20.229820    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:20.229834    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:20.242300    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:20.242314    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:20.260099    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:20.260109    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:20.276116    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:20.276126    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:20.288917    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:20.288928    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:20.301959    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:20.301971    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:20.326118    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:20.326136    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:20.363344    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:20.363358    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:20.367985    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:20.367992    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:20.382759    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:20.382771    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:22.899040    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:22.060687    7676 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0610 03:37:22.060818    7676 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0610 03:37:22.062169    7676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 03:37:22.066270    7676 kubeadm.go:877] updating cluster {Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0610 03:37:22.066313    7676 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 03:37:22.066356    7676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 03:37:22.077244    7676 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 03:37:22.077252    7676 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 03:37:22.077298    7676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 03:37:22.080437    7676 ssh_runner.go:195] Run: which lz4
	I0610 03:37:22.081661    7676 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 03:37:22.082938    7676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 03:37:22.082950    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0610 03:37:22.819524    7676 docker.go:649] duration metric: took 737.909583ms to copy over tarball
	I0610 03:37:22.819594    7676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 03:37:24.015250    7676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.195650625s)
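	For scale: 359,514,331 bytes copied in ~0.738s and unpacked in ~1.196s works out to roughly 465 MiB/s over scp and 287 MiB/s for extraction. The arithmetic, using the figures logged above:

	package main

	import "fmt"

	func main() {
		const bytes = 359514331.0 // preload tarball size from the log
		copySec, extractSec := 0.737909583, 1.195650625
		fmt.Printf("scp: %.0f MiB/s\n", bytes/copySec/(1<<20))    // ≈ 465 MiB/s
		fmt.Printf("tar: %.0f MiB/s\n", bytes/extractSec/(1<<20)) // ≈ 287 MiB/s
	}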
	I0610 03:37:24.015269    7676 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 03:37:24.031224    7676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 03:37:24.034557    7676 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0610 03:37:24.040005    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:24.107507    7676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 03:37:25.792275    7676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.684779292s)
	I0610 03:37:25.792368    7676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 03:37:25.803715    7676 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 03:37:25.803725    7676 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 03:37:25.803730    7676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 03:37:25.810138    7676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:25.810168    7676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:25.810215    7676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:25.810294    7676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:25.810361    7676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:25.810413    7676 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 03:37:25.810456    7676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:25.810650    7676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:25.818929    7676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:25.818993    7676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:25.819051    7676 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 03:37:25.819119    7676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:25.819156    7676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:25.819726    7676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:25.819750    7676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:25.819783    7676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
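A note on the eight "daemon lookup" failures above: they are expected on a clean runner. The host's Docker daemon holds none of these tags locally, so minikube falls back to the per-architecture file cache under .minikube/cache/images/arm64 and probes the guest's runtime instead. A minimal Go sketch of that presence probe, mirroring the docker image inspect runs that follow (the tag is just one of the eight from the list above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors "docker image inspect --format {{.Id}} <tag>": a non-zero
        // exit means the runtime has no local copy and a cache transfer is needed.
        img := "registry.k8s.io/kube-apiserver:v1.24.1" // one of the tags above
        if err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", img).Run(); err != nil {
            fmt.Printf("%s not present: needs transfer from cache\n", img)
        }
    }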
	I0610 03:37:26.668879    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:26.682238    7676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0610 03:37:26.682267    7676 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:26.682329    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:26.697272    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0610 03:37:26.699439    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:26.707498    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0610 03:37:26.709555    7676 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0610 03:37:26.709572    7676 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:26.709611    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:26.719589    7676 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0610 03:37:26.719610    7676 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0610 03:37:26.719666    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0610 03:37:26.724614    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:26.727133    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0610 03:37:26.727249    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0610 03:37:26.730238    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 03:37:26.730337    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0610 03:37:26.736630    7676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0610 03:37:26.736636    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0610 03:37:26.736649    7676 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:26.736661    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0610 03:37:26.736684    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0610 03:37:26.736692    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:26.736692    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0610 03:37:26.767423    7676 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0610 03:37:26.767439    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0610 03:37:26.767611    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0610 03:37:26.832646    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
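The load step itself is the pipeline /bin/bash -c "sudo cat <tarball> | docker load" seen just above: the tarball is root-owned on the guest, so it is streamed through sudo cat into docker load's stdin rather than passed with -i. A local-only Go sketch of the same stdin-based load (path taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Stream an image tarball into the daemon, as "cat <file> | docker load" does.
        f, err := os.Open("/var/lib/minikube/images/pause_3.7") // path from the log
        if err != nil {
            panic(err)
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            panic(err)
        }
    }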
	I0610 03:37:26.833603    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:26.834313    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0610 03:37:26.841838    7676 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 03:37:26.841956    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:26.863282    7676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0610 03:37:26.863302    7676 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:26.863358    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:26.867823    7676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0610 03:37:26.867842    7676 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:26.867891    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:26.897674    7676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0610 03:37:26.897703    7676 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:26.897757    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:26.911811    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0610 03:37:26.918413    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0610 03:37:26.966937    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 03:37:26.967057    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0610 03:37:26.971178    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0610 03:37:26.971208    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0610 03:37:26.994010    7676 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0610 03:37:26.994032    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0610 03:37:27.044821    7676 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 03:37:27.044933    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:27.153581    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0610 03:37:27.153606    7676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 03:37:27.153627    7676 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:27.153640    7676 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0610 03:37:27.153677    7676 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:27.153679    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0610 03:37:27.195835    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0610 03:37:27.195862    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 03:37:27.195980    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0610 03:37:27.197332    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0610 03:37:27.197344    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0610 03:37:27.219858    7676 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0610 03:37:27.219872    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0610 03:37:27.496272    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0610 03:37:27.496312    7676 cache_images.go:92] duration metric: took 1.692602333s to LoadCachedImages
	W0610 03:37:27.496366    7676 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
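That warning is the net result of the whole sequence: pause, etcd, coredns and storage-provisioner were probed, transferred, and loaded, but the kube-apiserver cache file is missing on the build host, so LoadCachedImages aborts with the stat error after 1.69s and the run proceeds to kubeadm setup without it. Each transfer above is gated by a remote stat -c "%s %y" (size and mtime) probe, where exit status 1 means "not on the node yet" and triggers the scp. A sketch of that gate with the etcd path from the log (run locally here rather than over ssh):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Stand-in for the remote stat probe: copy the tarball only if the
        // destination does not already hold it.
        dst := "/var/lib/minikube/images/etcd_3.5.3-0" // destination from the log
        if _, err := os.Stat(dst); os.IsNotExist(err) {
            fmt.Println("not cached on the node: scp the tarball, then docker load it")
        }
    }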
	I0610 03:37:27.496372    7676 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0610 03:37:27.496426    7676 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-390000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 03:37:27.496497    7676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 03:37:27.510063    7676 cni.go:84] Creating CNI manager for ""
	I0610 03:37:27.510076    7676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:37:27.510084    7676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 03:37:27.510092    7676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-390000 NodeName:stopped-upgrade-390000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 03:37:27.510169    7676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-390000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 03:37:27.510224    7676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0610 03:37:27.513735    7676 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 03:37:27.513764    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 03:37:27.516603    7676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0610 03:37:27.521418    7676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 03:37:27.526291    7676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0610 03:37:27.531635    7676 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0610 03:37:27.532716    7676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
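The /etc/hosts rewrite above is a small atomic-replace idiom: grep -v drops any stale control-plane.minikube.internal line, the fresh mapping is echoed on the end, the result goes to a temp file, and sudo cp swaps it into place so the file is never read half-written. The same filter-and-append in Go, assuming local access to the file (the entry is the one from the log):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "10.0.2.15\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Keep every line that is not a previous control-plane mapping.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        // Write a temp file first; copying it into place keeps the swap clean.
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }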
	I0610 03:37:27.536103    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:27.613253    7676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 03:37:27.620421    7676 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000 for IP: 10.0.2.15
	I0610 03:37:27.620432    7676 certs.go:194] generating shared ca certs ...
	I0610 03:37:27.620441    7676 certs.go:226] acquiring lock for ca certs: {Name:mk21a2158098c453d4ecfbaacf1fd5e5adc33d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.620644    7676 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.key
	I0610 03:37:27.620699    7676 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.key
	I0610 03:37:27.620705    7676 certs.go:256] generating profile certs ...
	I0610 03:37:27.620771    7676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.key
	I0610 03:37:27.620792    7676 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7
	I0610 03:37:27.620802    7676 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0610 03:37:27.762362    7676 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7 ...
	I0610 03:37:27.762374    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7: {Name:mka209fce3c1d7d58298def8d16d9dfa28e624d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.762610    7676 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7 ...
	I0610 03:37:27.762616    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7: {Name:mk756e26fcf66ac152cef76d320a0821d848894c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.762740    7676 certs.go:381] copying /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt
	I0610 03:37:27.762863    7676 certs.go:385] copying /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key
	I0610 03:37:27.763024    7676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/proxy-client.key
	I0610 03:37:27.763150    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687.pem (1338 bytes)
	W0610 03:37:27.763189    7676 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687_empty.pem, impossibly tiny 0 bytes
	I0610 03:37:27.763194    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 03:37:27.763212    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem (1078 bytes)
	I0610 03:37:27.763229    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem (1123 bytes)
	I0610 03:37:27.763245    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem (1675 bytes)
	I0610 03:37:27.763281    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem (1708 bytes)
	I0610 03:37:27.763610    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 03:37:27.771006    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 03:37:27.777824    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 03:37:27.784543    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 03:37:27.791062    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 03:37:27.798185    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 03:37:27.805425    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 03:37:27.812253    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 03:37:27.818886    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem --> /usr/share/ca-certificates/56872.pem (1708 bytes)
	I0610 03:37:27.825900    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 03:37:27.832640    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687.pem --> /usr/share/ca-certificates/5687.pem (1338 bytes)
	I0610 03:37:27.839200    7676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 03:37:27.844206    7676 ssh_runner.go:195] Run: openssl version
	I0610 03:37:27.846009    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56872.pem && ln -fs /usr/share/ca-certificates/56872.pem /etc/ssl/certs/56872.pem"
	I0610 03:37:27.849541    7676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56872.pem
	I0610 03:37:27.850917    7676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:20 /usr/share/ca-certificates/56872.pem
	I0610 03:37:27.850935    7676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56872.pem
	I0610 03:37:27.852788    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56872.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 03:37:27.855486    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 03:37:27.858452    7676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:37:27.859803    7676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:37:27.859825    7676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:37:27.861376    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 03:37:27.864102    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5687.pem && ln -fs /usr/share/ca-certificates/5687.pem /etc/ssl/certs/5687.pem"
	I0610 03:37:27.866967    7676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5687.pem
	I0610 03:37:27.868439    7676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:20 /usr/share/ca-certificates/5687.pem
	I0610 03:37:27.868455    7676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5687.pem
	I0610 03:37:27.870082    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5687.pem /etc/ssl/certs/51391683.0"
	I0610 03:37:27.873388    7676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 03:37:27.874716    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 03:37:27.876914    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 03:37:27.878784    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 03:37:27.880709    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 03:37:27.882603    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 03:37:27.884425    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
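The six openssl runs above all use -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); any failure would force cert regeneration before the restart. The equivalent check via crypto/x509, with an illustrative cert path:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Equivalent of "openssl x509 -noout -checkend 86400 -in <cert>".
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // illustrative
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h: regenerate before restart")
        }
    }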
	I0610 03:37:27.886242    7676 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:37:27.886313    7676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 03:37:27.896813    7676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 03:37:27.900004    7676 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 03:37:27.900009    7676 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 03:37:27.900012    7676 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 03:37:27.900029    7676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 03:37:27.902838    7676 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 03:37:27.903130    7676 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-390000" does not appear in /Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:37:27.903232    7676 kubeconfig.go:62] /Users/jenkins/minikube-integration/19046-4812/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-390000" cluster setting kubeconfig missing "stopped-upgrade-390000" context setting]
	I0610 03:37:27.903411    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/kubeconfig: {Name:mke25032b58aa44d6357ccc49c0a5254f131209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.904700    7676 kapi.go:59] client config for stopped-upgrade-390000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f80460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 03:37:27.905054    7676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 03:37:27.908001    7676 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-390000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
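The drift shown in the diff is the upgrade itself: the kubeadm.yaml written by the older minikube used a bare criSocket path and cgroupDriver: systemd, while the new binary renders the unix:// socket URI, cgroupfs, and the extra kubelet options, so the cluster is reconfigured from kubeadm.yaml.new instead of being reused as-is. The decision reduces to "reconfigure iff diff -u is non-empty"; a sketch using the paths from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // diff exits 0 when the files match and 1 when they differ.
        cmd := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.Output()
        if err != nil {
            fmt.Printf("kubeadm config drift detected, reconfiguring:\n%s", out)
        }
    }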
	I0610 03:37:27.908008    7676 kubeadm.go:1154] stopping kube-system containers ...
	I0610 03:37:27.908056    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 03:37:27.918870    7676 docker.go:483] Stopping containers: [d5521dc872d7 bd2137ddded5 7e88f7ae5ad5 734fef33c2cb e6410a69bdaf 58bb62977b0b 7b9a20d5b4ac e73126707c04]
	I0610 03:37:27.918904    7676 ssh_runner.go:195] Run: docker stop d5521dc872d7 bd2137ddded5 7e88f7ae5ad5 734fef33c2cb e6410a69bdaf 58bb62977b0b 7b9a20d5b4ac e73126707c04
	I0610 03:37:27.930635    7676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 03:37:27.936806    7676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 03:37:27.939731    7676 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 03:37:27.939739    7676 kubeadm.go:156] found existing configuration files:
	
	I0610 03:37:27.939777    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf
	I0610 03:37:27.942547    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 03:37:27.942585    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 03:37:27.946044    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf
	I0610 03:37:27.949350    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 03:37:27.949389    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 03:37:27.952360    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf
	I0610 03:37:27.955077    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 03:37:27.955105    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 03:37:27.958254    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf
	I0610 03:37:27.961320    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 03:37:27.961358    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 03:37:27.964175    7676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 03:37:27.967205    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:27.991758    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:28.262820    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
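The restart path re-runs individual kubeadm init phases rather than a full init: certs, kubeconfig, kubelet-start, and (just below) control-plane and etcd, each against the freshly copied /var/tmp/minikube/kubeadm.yaml. A sketch of the same phased sequence, assuming kubeadm is on PATH (the log prefixes it with the versioned binaries directory):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The phases, in the order the log runs them.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
    }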
	I0610 03:37:27.899835    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:27.903731    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:27.918698    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:27.918771    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:27.932287    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:27.932345    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:27.943683    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:27.943738    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:27.955004    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:27.955075    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:27.966519    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:27.966581    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:27.982731    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:27.982796    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:27.993523    7510 logs.go:276] 0 containers: []
	W0610 03:37:27.993534    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:27.993594    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:28.004730    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:28.004759    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:28.004765    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:28.009398    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:28.009409    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:28.023649    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:28.023665    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:28.040985    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:28.041002    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:28.053144    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:28.053156    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:28.089616    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:28.089631    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:28.101882    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:28.101893    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:28.113809    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:28.113824    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:28.126316    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:28.126330    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:28.162167    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:28.162174    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:28.187995    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:28.188007    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:28.199607    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:28.199617    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:28.211162    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:28.211172    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:28.234439    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:28.234450    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:28.246526    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:28.246537    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:28.261227    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:28.261236    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:28.276784    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:28.276801    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:30.796174    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:28.396291    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:28.423505    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:28.458867    7676 api_server.go:52] waiting for apiserver process to appear ...
	I0610 03:37:28.458958    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:37:28.961036    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:37:29.460971    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:37:29.465274    7676 api_server.go:72] duration metric: took 1.006424583s to wait for apiserver process to appear ...
	I0610 03:37:29.465285    7676 api_server.go:88] waiting for apiserver healthz status ...
	I0610 03:37:29.465294    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:35.798639    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:35.798833    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:35.818790    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:35.818873    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:35.832460    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:35.832529    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:35.843788    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:35.843859    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:35.855703    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:35.855779    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:35.865960    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:35.866032    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:35.876745    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:35.876813    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:35.887046    7510 logs.go:276] 0 containers: []
	W0610 03:37:35.887057    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:35.887114    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:35.902679    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:35.902696    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:35.902701    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:35.916417    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:35.916428    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:35.941207    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:35.941218    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:35.955889    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:35.955900    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:35.967832    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:35.967844    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:35.983966    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:35.983977    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:36.002056    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:36.002067    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:36.017890    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:36.017900    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:36.053225    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:36.053236    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:36.064958    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:36.064968    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:36.079047    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:36.079059    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:36.090722    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:36.090732    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:36.113590    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:36.113597    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:36.150527    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:36.150535    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:36.162006    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:36.162019    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:36.167692    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:36.167701    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:36.178820    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:36.178834    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:34.467330    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:34.467379    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:38.697826    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:39.467604    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:39.467633    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
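From here both test processes (7676, restarting stopped-upgrade-390000, and 7510, interleaved from a parallel test) sit in the same loop: GET https://10.0.2.15:8443/healthz with roughly a five-second deadline, log "stopped" on timeout, gather diagnostic logs, retry. A minimal bounded version of that poll (endpoint from the log; the real loop retries far longer):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between checks above
            Transport: &http.Transport{
                // The apiserver serves a self-signed cert not in the system store.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 3; i++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err)
                continue
            }
            ok := resp.StatusCode == http.StatusOK
            resp.Body.Close()
            if ok {
                fmt.Println("apiserver healthy")
                return
            }
        }
    }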
	I0610 03:37:43.700001    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:43.700173    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:43.714820    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:43.714899    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:43.726196    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:43.726271    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:43.736841    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:43.736915    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:43.747618    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:43.747690    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:43.758236    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:43.758306    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:43.768879    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:43.768948    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:43.779150    7510 logs.go:276] 0 containers: []
	W0610 03:37:43.779161    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:43.779220    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:43.789504    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:43.789522    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:43.789528    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:43.801330    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:43.801342    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:43.813150    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:43.813161    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:43.824841    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:43.824857    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:43.836437    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:43.836473    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:43.848184    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:43.848196    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:43.865895    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:43.865909    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:43.881565    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:43.881579    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:43.902804    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:43.902814    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:43.916135    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:43.916147    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:43.927990    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:43.928011    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:43.951186    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:43.951200    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:43.975740    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:43.975752    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:43.990207    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:43.990219    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:44.001255    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:44.001268    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:44.036857    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:44.036866    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:44.041032    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:44.041042    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:46.577448    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:44.468011    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:44.468047    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:51.579562    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:51.579828    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:37:51.594892    7510 logs.go:276] 2 containers: [dfccadff942b 824abf5cac7a]
	I0610 03:37:51.594969    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:37:51.610810    7510 logs.go:276] 2 containers: [8dc1ae08dbc9 69f1a3133c99]
	I0610 03:37:51.610885    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:37:51.621987    7510 logs.go:276] 1 containers: [f602fc8d964f]
	I0610 03:37:51.622054    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:37:51.633033    7510 logs.go:276] 2 containers: [9ef066576c19 def37e7dd307]
	I0610 03:37:51.633100    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:37:51.646128    7510 logs.go:276] 1 containers: [867363cf166a]
	I0610 03:37:51.646194    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:37:51.656552    7510 logs.go:276] 2 containers: [88a800a47063 a3c577c83424]
	I0610 03:37:51.656614    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:37:51.666749    7510 logs.go:276] 0 containers: []
	W0610 03:37:51.666761    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:37:51.666821    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:37:51.677773    7510 logs.go:276] 2 containers: [4afeda2760b7 789b269e164e]
	I0610 03:37:51.677791    7510 logs.go:123] Gathering logs for kube-controller-manager [88a800a47063] ...
	I0610 03:37:51.677797    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88a800a47063"
	I0610 03:37:51.696437    7510 logs.go:123] Gathering logs for storage-provisioner [4afeda2760b7] ...
	I0610 03:37:51.696450    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4afeda2760b7"
	I0610 03:37:51.707798    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:37:51.707808    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:37:51.744663    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:37:51.744679    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:37:51.782733    7510 logs.go:123] Gathering logs for kube-apiserver [824abf5cac7a] ...
	I0610 03:37:51.782746    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824abf5cac7a"
	I0610 03:37:51.807168    7510 logs.go:123] Gathering logs for coredns [f602fc8d964f] ...
	I0610 03:37:51.807179    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f602fc8d964f"
	I0610 03:37:51.818918    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:37:51.818930    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:37:51.842066    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:37:51.842081    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:37:51.853996    7510 logs.go:123] Gathering logs for etcd [8dc1ae08dbc9] ...
	I0610 03:37:51.854007    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dc1ae08dbc9"
	I0610 03:37:51.867836    7510 logs.go:123] Gathering logs for kube-scheduler [9ef066576c19] ...
	I0610 03:37:51.867848    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ef066576c19"
	I0610 03:37:51.880093    7510 logs.go:123] Gathering logs for kube-scheduler [def37e7dd307] ...
	I0610 03:37:51.880104    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 def37e7dd307"
	I0610 03:37:51.896586    7510 logs.go:123] Gathering logs for kube-proxy [867363cf166a] ...
	I0610 03:37:51.896596    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 867363cf166a"
	I0610 03:37:51.908179    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:37:51.908188    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:37:51.913046    7510 logs.go:123] Gathering logs for kube-apiserver [dfccadff942b] ...
	I0610 03:37:51.913052    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dfccadff942b"
	I0610 03:37:51.931125    7510 logs.go:123] Gathering logs for etcd [69f1a3133c99] ...
	I0610 03:37:51.931136    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 69f1a3133c99"
	I0610 03:37:51.945543    7510 logs.go:123] Gathering logs for kube-controller-manager [a3c577c83424] ...
	I0610 03:37:51.945557    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3c577c83424"
	I0610 03:37:51.957137    7510 logs.go:123] Gathering logs for storage-provisioner [789b269e164e] ...
	I0610 03:37:51.957146    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 789b269e164e"
	I0610 03:37:49.468477    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:49.468554    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:54.469950    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:54.469477    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:54.469526    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:59.471044    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:59.471089    7510 kubeadm.go:591] duration metric: took 4m4.218128208s to restartPrimaryControlPlane
	W0610 03:37:59.471143    7510 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 03:37:59.471167    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0610 03:38:00.447631    7510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 03:38:00.452436    7510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 03:38:00.455086    7510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 03:38:00.457838    7510 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 03:38:00.457847    7510 kubeadm.go:156] found existing configuration files:
	
	I0610 03:38:00.457868    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/admin.conf
	I0610 03:38:00.460806    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 03:38:00.460827    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 03:38:00.463635    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/kubelet.conf
	I0610 03:38:00.465996    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 03:38:00.466019    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 03:38:00.468975    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/controller-manager.conf
	I0610 03:38:00.471750    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 03:38:00.471772    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 03:38:00.474199    7510 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/scheduler.conf
	I0610 03:38:00.477106    7510 kubeadm.go:162] "https://control-plane.minikube.internal:51096" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51096 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 03:38:00.477125    7510 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 03:38:00.479881    7510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 03:38:00.497140    7510 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0610 03:38:00.497210    7510 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 03:38:00.549423    7510 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 03:38:00.549476    7510 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 03:38:00.549526    7510 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 03:38:00.599716    7510 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 03:38:00.603949    7510 out.go:204]   - Generating certificates and keys ...
	I0610 03:38:00.603984    7510 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 03:38:00.604015    7510 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 03:38:00.604125    7510 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 03:38:00.604250    7510 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 03:38:00.604381    7510 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 03:38:00.604414    7510 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 03:38:00.604455    7510 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 03:38:00.604537    7510 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 03:38:00.604604    7510 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 03:38:00.604653    7510 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 03:38:00.604716    7510 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 03:38:00.604752    7510 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 03:38:00.761174    7510 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 03:38:00.831120    7510 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 03:38:01.056168    7510 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 03:38:01.126354    7510 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 03:38:01.155950    7510 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 03:38:01.156252    7510 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 03:38:01.156296    7510 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 03:38:01.256765    7510 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 03:38:01.259914    7510 out.go:204]   - Booting up control plane ...
	I0610 03:38:01.259958    7510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 03:38:01.259993    7510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 03:38:01.260021    7510 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 03:38:01.260086    7510 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 03:38:01.263561    7510 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 03:37:59.470899    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:59.470961    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:06.265506    7510 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.001727 seconds
	I0610 03:38:06.265565    7510 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 03:38:06.269150    7510 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 03:38:06.790809    7510 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 03:38:06.791289    7510 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-479000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 03:38:07.303059    7510 kubeadm.go:309] [bootstrap-token] Using token: ugvriq.mt0pdmq79dand9fb
	I0610 03:38:07.307564    7510 out.go:204]   - Configuring RBAC rules ...
	I0610 03:38:07.307633    7510 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 03:38:07.309688    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 03:38:07.317446    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 03:38:07.318281    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 03:38:07.319099    7510 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 03:38:07.320033    7510 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 03:38:07.323115    7510 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 03:38:07.501102    7510 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 03:38:07.711389    7510 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 03:38:07.711758    7510 kubeadm.go:309] 
	I0610 03:38:07.711790    7510 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 03:38:07.711794    7510 kubeadm.go:309] 
	I0610 03:38:07.711847    7510 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 03:38:07.711853    7510 kubeadm.go:309] 
	I0610 03:38:07.711867    7510 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 03:38:07.711894    7510 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 03:38:07.711924    7510 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 03:38:07.711931    7510 kubeadm.go:309] 
	I0610 03:38:07.711966    7510 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 03:38:07.711970    7510 kubeadm.go:309] 
	I0610 03:38:07.711992    7510 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 03:38:07.711995    7510 kubeadm.go:309] 
	I0610 03:38:07.712018    7510 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 03:38:07.712064    7510 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 03:38:07.712107    7510 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 03:38:07.712110    7510 kubeadm.go:309] 
	I0610 03:38:07.712146    7510 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 03:38:07.712177    7510 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 03:38:07.712179    7510 kubeadm.go:309] 
	I0610 03:38:07.712214    7510 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ugvriq.mt0pdmq79dand9fb \
	I0610 03:38:07.712257    7510 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb \
	I0610 03:38:07.712267    7510 kubeadm.go:309] 	--control-plane 
	I0610 03:38:07.712271    7510 kubeadm.go:309] 
	I0610 03:38:07.712308    7510 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 03:38:07.712312    7510 kubeadm.go:309] 
	I0610 03:38:07.712348    7510 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ugvriq.mt0pdmq79dand9fb \
	I0610 03:38:07.712408    7510 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb 
	I0610 03:38:07.712464    7510 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 03:38:07.712471    7510 cni.go:84] Creating CNI manager for ""
	I0610 03:38:07.712479    7510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:38:07.716717    7510 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 03:38:07.723735    7510 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 03:38:07.726593    7510 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 03:38:07.733053    7510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 03:38:07.733105    7510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 03:38:07.733105    7510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-479000 minikube.k8s.io/updated_at=2024_06_10T03_38_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=running-upgrade-479000 minikube.k8s.io/primary=true
	I0610 03:38:07.778341    7510 ops.go:34] apiserver oom_adj: -16
	I0610 03:38:07.778444    7510 kubeadm.go:1107] duration metric: took 45.388333ms to wait for elevateKubeSystemPrivileges
	W0610 03:38:07.778463    7510 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 03:38:07.778467    7510 kubeadm.go:393] duration metric: took 4m12.540072625s to StartCluster
	I0610 03:38:07.778476    7510 settings.go:142] acquiring lock: {Name:mke35f292ed93eff7117a159773dd0e114b7dd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:38:07.778631    7510 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:38:07.779011    7510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/kubeconfig: {Name:mke25032b58aa44d6357ccc49c0a5254f131209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:38:07.779213    7510 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:38:07.782817    7510 out.go:177] * Verifying Kubernetes components...
	I0610 03:38:07.779239    7510 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 03:38:07.779329    7510 config.go:182] Loaded profile config "running-upgrade-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:38:07.789755    7510 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-479000"
	I0610 03:38:07.789767    7510 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-479000"
	I0610 03:38:07.789769    7510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0610 03:38:07.789770    7510 addons.go:243] addon storage-provisioner should already be in state true
	I0610 03:38:07.789800    7510 host.go:66] Checking if "running-upgrade-479000" exists ...
	I0610 03:38:07.789769    7510 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-479000"
	I0610 03:38:07.789824    7510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-479000"
	I0610 03:38:07.790832    7510 kapi.go:59] client config for running-upgrade-479000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/running-upgrade-479000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104358460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 03:38:07.790982    7510 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-479000"
	W0610 03:38:07.790987    7510 addons.go:243] addon default-storageclass should already be in state true
	I0610 03:38:07.791000    7510 host.go:66] Checking if "running-upgrade-479000" exists ...
	I0610 03:38:07.795567    7510 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:38:07.799702    7510 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 03:38:07.799707    7510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 03:38:07.799714    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	I0610 03:38:07.800329    7510 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 03:38:07.800335    7510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 03:38:07.800339    7510 sshutil.go:53] new ssh client: &{IP:localhost Port:51064 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/running-upgrade-479000/id_rsa Username:docker}
	I0610 03:38:07.880363    7510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 03:38:07.885093    7510 api_server.go:52] waiting for apiserver process to appear ...
	I0610 03:38:07.885139    7510 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:38:07.889515    7510 api_server.go:72] duration metric: took 110.291916ms to wait for apiserver process to appear ...
	I0610 03:38:07.889524    7510 api_server.go:88] waiting for apiserver healthz status ...
	I0610 03:38:07.889531    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:04.472207    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:04.472229    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:07.903441    7510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 03:38:07.907958    7510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 03:38:12.890121    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:12.890141    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:09.472821    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:09.472846    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:17.891444    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:17.891484    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:14.473808    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:14.473849    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:22.891933    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:22.891967    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:19.475890    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:19.475926    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:27.892273    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:27.892295    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:24.478115    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:24.478158    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:32.892917    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:32.892936    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:29.480359    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:29.480496    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:29.494291    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:29.494369    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:29.506666    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:29.506739    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:29.517069    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:29.517134    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:29.527426    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:29.527517    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:29.538103    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:29.538176    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:29.548615    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:29.548691    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:29.558711    7676 logs.go:276] 0 containers: []
	W0610 03:38:29.558722    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:29.558782    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:29.569173    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:29.569194    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:29.569201    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:29.606728    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:29.606739    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:29.647905    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:29.647917    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:29.660115    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:29.660127    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:29.676576    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:29.676588    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:29.776605    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:29.776616    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:29.789647    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:29.789663    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:29.807201    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:29.807217    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:29.819099    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:29.819113    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:29.831184    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:29.831196    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:29.846764    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:29.846774    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:29.858161    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:29.858172    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:29.869258    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:29.869270    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:29.894086    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:29.894094    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:29.898090    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:29.898099    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:29.911856    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:29.911865    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:29.925533    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:29.925544    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:38:32.441985    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:37.893488    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:37.893523    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0610 03:38:38.246907    7510 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0610 03:38:38.251721    7510 out.go:177] * Enabled addons: storage-provisioner
	I0610 03:38:37.444184    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:37.444406    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:37.464033    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:37.464122    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:37.476576    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:37.476647    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:37.488137    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:37.488204    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:37.498740    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:37.498824    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:37.509276    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:37.509345    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:37.519491    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:37.519559    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:37.529521    7676 logs.go:276] 0 containers: []
	W0610 03:38:37.529532    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:37.529590    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:37.540119    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:37.540136    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:37.540143    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:37.578286    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:37.578297    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:38:37.596741    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:37.596751    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:37.608549    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:37.608561    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:37.648429    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:37.648441    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:37.660106    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:37.660117    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:37.671724    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:37.671736    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:37.689253    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:37.689265    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:37.714065    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:37.714078    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:37.725975    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:37.725989    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:37.741568    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:37.741581    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:37.753174    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:37.753184    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:37.791625    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:37.791633    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:37.795811    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:37.795819    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:37.810103    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:37.810116    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:37.824658    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:37.824668    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:37.838747    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:37.838758    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:38.258621    7510 addons.go:510] duration metric: took 30.479881584s for enable addons: enabled=[storage-provisioner]
	I0610 03:38:42.894607    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:42.894647    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:40.351011    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:47.895558    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:47.895580    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:45.353228    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:45.353341    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:45.365454    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:45.365527    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:45.378291    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:45.378399    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:45.389780    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:45.389857    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:45.400481    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:45.400554    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:45.410747    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:45.410816    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:45.421098    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:45.421163    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:45.432558    7676 logs.go:276] 0 containers: []
	W0610 03:38:45.432570    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:45.432627    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:45.443014    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:45.443036    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:45.443042    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:45.480103    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:45.480116    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:45.494881    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:45.494892    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:45.509644    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:45.509655    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:45.521226    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:45.521236    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:45.532433    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:45.532443    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:45.546163    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:45.546173    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:45.571261    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:45.571272    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:45.583313    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:45.583325    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:45.597698    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:45.597708    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:45.635543    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:45.635554    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:45.650301    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:45.650315    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:45.667375    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:45.667386    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:45.671615    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:45.671621    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:45.706938    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:45.706948    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:45.718594    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:45.718607    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:38:45.733687    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:45.733697    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:48.247129    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:52.896904    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:52.896936    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:53.249375    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:53.249490    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:53.260123    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:53.260191    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:53.270604    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:53.270674    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:53.281103    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:53.281170    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:53.291232    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:53.291302    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:53.301957    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:53.302022    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:53.319734    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:53.319795    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:53.329748    7676 logs.go:276] 0 containers: []
	W0610 03:38:53.329759    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:53.329814    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:53.340625    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:53.340645    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:53.340651    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:53.351653    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:53.356420    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:53.395344    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:53.395359    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:53.414865    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:53.414876    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:53.451481    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:53.451491    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:53.458116    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:53.458124    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:53.472099    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:53.472110    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:53.487025    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:53.487035    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:53.498921    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:53.498930    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:53.517186    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:53.517195    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:53.556103    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:53.556113    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:53.567421    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:53.567432    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:53.579258    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:53.579271    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:38:53.593528    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:53.593539    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:53.618870    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:53.618880    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:53.631260    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:53.631276    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:53.645261    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:53.645273    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:56.161565    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:57.898728    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:57.903080    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:01.163574    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:01.163809    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:01.182389    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:01.182485    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:01.197300    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:01.197376    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:01.209282    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:01.209342    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:01.219557    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:01.219625    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:01.233361    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:01.233427    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:01.243741    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:01.243807    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:01.253509    7676 logs.go:276] 0 containers: []
	W0610 03:39:01.253523    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:01.253576    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:01.264145    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:01.264166    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:01.264172    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:01.299370    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:01.299383    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:01.313856    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:01.313867    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:01.329466    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:01.329480    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:01.340373    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:01.340385    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:01.354860    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:01.354874    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:01.379658    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:01.379669    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:01.391842    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:01.391853    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:01.410409    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:01.410421    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:01.450832    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:01.450853    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:01.456418    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:01.456429    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:01.471426    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:01.471437    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:01.511156    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:01.511168    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:01.526652    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:01.526665    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:01.539223    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:01.539235    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:01.553689    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:01.553702    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:01.566683    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:01.566696    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
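Both processes above are stuck in the same loop: probe the apiserver healthz endpoint, give up after roughly five seconds, then sweep the node for diagnostics. The "Client.Timeout exceeded while awaiting headers" text is the Go HTTP client's own deadline firing, meaning the apiserver at 10.0.2.15:8443 never answered at all over the window shown, as opposed to answering with a failure. A minimal sketch of running the same probe by hand, assuming the default profile, that the guest image ships curl, and that the VM is still up (-k skips certificate checks, since only reachability matters here):

    # Probe the endpoint the log polls; address copied from the lines above.
    minikube ssh -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz
    # No output after ~5s matches the timeouts above; a healthy apiserver
    # prints "ok".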
	I0610 03:39:02.904116    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:02.905081    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:04.084127    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:07.906941    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:07.907050    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:07.924247    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:07.924318    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:07.935741    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:07.935813    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:07.946376    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:07.946439    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:07.957044    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:07.957111    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:07.967207    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:07.967274    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:07.979457    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:07.979525    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:07.990079    7510 logs.go:276] 0 containers: []
	W0610 03:39:07.990093    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:07.990155    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:08.000199    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:08.000215    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:08.000221    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:08.017062    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:08.017071    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:08.028319    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:08.028329    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:08.060756    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:08.060764    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:08.097283    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:08.097295    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:08.112492    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:08.112502    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:08.126938    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:08.126954    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:08.141198    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:08.141211    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:08.153471    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:08.153481    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:08.158068    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:08.158073    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:08.170159    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:08.170174    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:08.181644    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:08.181653    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:08.197066    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:08.197076    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
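Each sweep begins by discovering container IDs per control-plane component through docker ps name filters. Only the kindnet filter comes back empty, which is expected when the cluster simply does not run the kindnet CNI, so the repeated warning reads as benign noise rather than a cause. Two IDs for one component (as in the 7676 cluster above for apiserver, etcd, scheduler, and controller-manager) usually means an exited instance plus its restart, since -a includes stopped containers. A sketch of the same discovery pass, run inside the node:

    # Mirror the per-component lookups above (component names copied from
    # the log's filters).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}' | tr '\n' ' ')
        printf '%s: %s\n' "$c" "${ids:-none}"
    done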
	I0610 03:39:10.723591    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:09.086408    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:09.086571    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:09.098447    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:09.098522    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:09.109436    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:09.109506    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:09.119692    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:09.119765    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:09.130636    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:09.130717    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:09.143711    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:09.143776    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:09.154305    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:09.154375    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:09.164293    7676 logs.go:276] 0 containers: []
	W0610 03:39:09.164303    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:09.164363    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:09.174724    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:09.174744    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:09.174749    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:09.188984    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:09.188995    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:09.202176    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:09.202187    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:09.206868    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:09.206873    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:09.220884    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:09.220896    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:09.235101    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:09.235111    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:09.246246    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:09.246258    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:09.258291    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:09.258303    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:09.296111    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:09.296120    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:09.332420    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:09.332430    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:09.347276    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:09.347293    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:09.372190    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:09.372198    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:09.410539    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:09.410553    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:09.425309    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:09.425320    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:09.443063    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:09.443075    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:09.456414    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:09.456430    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:09.470876    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:09.470886    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:11.987701    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:15.726003    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:15.726177    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:15.737180    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:15.737252    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:15.747196    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:15.747262    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:15.757821    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:15.757886    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:15.768558    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:15.768623    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:15.778995    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:15.779070    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:15.789352    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:15.789423    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:15.799782    7510 logs.go:276] 0 containers: []
	W0610 03:39:15.799794    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:15.799855    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:15.810089    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:15.810104    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:15.810109    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:15.844500    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:15.844509    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:15.849045    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:15.849054    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:15.864540    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:15.864550    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:15.875723    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:15.875733    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:15.893320    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:15.893331    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:15.905336    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:15.905348    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:15.929927    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:15.929935    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:15.941204    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:15.941216    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:15.976459    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:15.976474    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:15.998719    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:15.998730    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:16.015677    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:16.015687    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:16.030308    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:16.030316    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
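After discovery, every sweep collects the same fixed evidence set: the last 400 lines of each container's logs, the kubelet and docker/cri-docker journals, filtered kernel messages, a node description via the kubeconfig staged inside the VM, and a container listing. Gathered by hand it is just these commands, with the paths and the v1.24.1 kubectl version copied from the log and <id> standing for any container ID found above:

    sudo journalctl -u kubelet -n 400                # kubelet journal, last 400 lines
    sudo journalctl -u docker -u cri-docker -n 400   # container-runtime journals
    # Kernel messages: no pager, human timestamps, no color, warnings and worse only.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    docker logs --tail 400 <id>                      # repeat per container ID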
	I0610 03:39:16.990132    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:16.990380    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:17.024375    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:17.024477    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:17.040629    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:17.040711    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:17.053516    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:17.053587    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:17.064709    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:17.064783    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:17.075103    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:17.075170    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:17.085590    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:17.085657    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:17.096341    7676 logs.go:276] 0 containers: []
	W0610 03:39:17.096354    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:17.096417    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:17.106803    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:17.106820    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:17.106826    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:17.119093    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:17.119105    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:17.133767    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:17.133778    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:17.145128    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:17.145137    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:17.149771    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:17.149778    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:17.163844    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:17.163855    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:17.175385    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:17.175396    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:17.195873    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:17.195883    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:17.217562    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:17.217572    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:17.241535    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:17.241544    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:17.278935    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:17.278945    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:17.316471    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:17.316482    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:17.331147    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:17.331157    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:17.343308    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:17.343321    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:17.377483    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:17.377493    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:17.391464    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:17.391475    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:17.405652    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:17.405664    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:18.549918    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:19.919852    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:23.552545    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:23.552722    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:23.576583    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:23.576659    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:23.590405    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:23.590483    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:23.601668    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:23.601732    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:23.612292    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:23.612363    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:23.622495    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:23.622565    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:23.634041    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:23.634106    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:23.644352    7510 logs.go:276] 0 containers: []
	W0610 03:39:23.644364    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:23.644422    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:23.654944    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:23.654959    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:23.654965    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:23.689291    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:23.689301    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:23.693947    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:23.693956    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:23.730005    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:23.730015    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:23.744611    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:23.744622    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:23.756905    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:23.756917    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:23.769058    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:23.769073    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:23.786658    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:23.786673    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:23.802210    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:23.802225    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:23.813838    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:23.813853    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:23.831372    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:23.831382    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:23.843339    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:23.843350    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:23.866144    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:23.866152    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
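The recurring "container status" line packs a small fallback idiom: the backticks expand to crictl's path when it is installed, or to the bare word crictl when it is not, in which case the first command fails with "command not found" and the || drops through to plain docker ps. Spelled out with the backticks replaced by $( ):

    # Same behavior as the log's one-liner: prefer crictl, and on any
    # failure (including crictl being absent) fall back to the Docker CLI.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a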
	I0610 03:39:26.379477    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:24.922557    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:24.922773    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:24.938916    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:24.939001    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:24.951023    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:24.951102    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:24.961278    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:24.961344    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:24.972175    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:24.972249    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:24.982986    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:24.983051    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:24.993727    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:24.993790    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:25.003926    7676 logs.go:276] 0 containers: []
	W0610 03:39:25.003937    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:25.003995    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:25.024105    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:25.024125    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:25.024130    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:25.063145    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:25.063157    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:25.075184    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:25.075197    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:25.110597    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:25.110608    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:25.125168    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:25.125182    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:25.140239    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:25.140253    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:25.177828    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:25.177836    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:25.192028    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:25.192039    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:25.208912    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:25.208923    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:25.226288    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:25.226298    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:25.238267    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:25.238281    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:25.250010    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:25.250022    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:25.254457    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:25.254468    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:25.265850    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:25.265861    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:25.277761    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:25.277772    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:25.291544    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:25.291554    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:25.304003    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:25.304013    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:27.830403    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:31.381876    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:31.382325    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:31.421046    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:31.421207    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:31.443877    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:31.443994    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:31.460157    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:31.460238    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:31.473858    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:31.473917    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:31.484584    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:31.484662    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:31.496145    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:31.496208    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:31.512974    7510 logs.go:276] 0 containers: []
	W0610 03:39:31.512992    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:31.513042    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:31.523459    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:31.523474    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:31.523479    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:31.534894    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:31.534905    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:31.570452    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:31.570468    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:31.610013    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:31.610026    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:31.624237    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:31.624254    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:31.636373    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:31.636387    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:31.651186    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:31.651197    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:31.667345    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:31.667356    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:31.691580    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:31.691589    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:31.696158    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:31.696165    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:31.710491    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:31.710501    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:31.721739    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:31.721750    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:31.733629    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:31.733639    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:32.832477    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:32.832570    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:32.843833    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:32.843907    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:32.856216    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:32.856293    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:32.866330    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:32.866397    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:32.877064    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:32.877145    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:32.887758    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:32.887827    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:32.898524    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:32.898591    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:32.908587    7676 logs.go:276] 0 containers: []
	W0610 03:39:32.908600    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:32.908662    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:32.919321    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:32.919337    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:32.919343    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:32.931104    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:32.931116    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:32.942725    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:32.942736    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:32.947060    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:32.947067    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:32.982443    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:32.982454    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:32.996923    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:32.996934    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:33.034763    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:33.034774    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:33.052841    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:33.052852    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:33.064130    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:33.064144    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:33.078842    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:33.078859    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:33.090352    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:33.090363    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:33.129498    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:33.129514    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:33.144198    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:33.144211    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:33.161919    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:33.161929    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:33.186520    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:33.186533    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:33.198115    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:33.198128    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:33.210296    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:33.210311    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:34.251818    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:35.726234    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:39.354512    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:39.354655    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:39.368283    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:39.368362    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:39.379135    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:39.379204    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:39.390025    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:39.390094    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:39.400494    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:39.400565    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:39.410487    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:39.410566    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:39.421573    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:39.421641    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:39.432858    7510 logs.go:276] 0 containers: []
	W0610 03:39:39.432870    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:39.432921    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:39.443263    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:39.443280    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:39.443285    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:39.455142    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:39.455152    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:39.469782    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:39.469790    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:39.490530    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:39.490541    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:39.502123    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:39.502131    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:39.536508    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:39.536516    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:39.571464    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:39.571473    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:39.583324    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:39.583334    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:39.594852    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:39.594862    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:39.617734    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:39.617742    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:39.629044    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:39.629053    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:39.633792    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:39.633799    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:39.647779    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:39.647790    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:42.163552    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:40.828983    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:40.829324    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:40.865912    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:40.866036    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:40.883506    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:40.883583    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:40.898794    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:40.898877    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:40.910104    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:40.910181    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:40.920967    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:40.921040    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:40.931776    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:40.931851    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:40.942241    7676 logs.go:276] 0 containers: []
	W0610 03:39:40.942253    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:40.942322    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:40.952704    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:40.952721    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:40.952727    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:40.966305    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:40.966317    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:40.977936    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:40.977947    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:40.989328    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:40.989341    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:41.011108    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:41.011119    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:41.026375    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:41.026388    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:41.040524    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:41.040535    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:41.054280    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:41.054289    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:41.092104    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:41.092114    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:41.108681    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:41.108692    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:41.122993    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:41.123003    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:41.161530    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:41.161542    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:41.166233    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:41.166239    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:41.178458    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:41.178470    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:41.216375    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:41.216387    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:41.228432    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:41.228442    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:41.242856    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:41.242867    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
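One reading hazard in this stretch: two minikube processes write interleaved into the report and flush independently, so wall-clock order occasionally steps backwards between adjacent lines (03:39:47 for PID 7510 directly followed by 03:39:43 for PID 7676 below, for example). The PID is the third column of each klog line, so one process's loop can be read straight through by filtering on it; test.log is a placeholder name for a saved copy of this output:

    # Keep only one process's lines; field 3 of klog output is the PID.
    awk '$3 == "7510"' test.log
    awk '$3 == "7676"' test.log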
	I0610 03:39:47.165939    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:47.166224    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:47.191563    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:47.191663    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:47.208082    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:47.208155    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:47.221494    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:47.221567    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:47.233274    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:47.233346    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:47.243814    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:47.243885    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:47.254391    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:47.254455    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:47.264997    7510 logs.go:276] 0 containers: []
	W0610 03:39:47.265010    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:47.265056    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:47.275382    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:47.275400    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:47.275405    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:47.307875    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:47.307884    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:47.315185    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:47.315193    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:47.353532    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:47.353546    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:47.365627    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:47.365638    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:47.377770    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:47.377781    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:47.395511    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:47.395522    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:47.419875    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:47.419884    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:47.434007    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:47.434016    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:47.448197    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:47.448206    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:47.463307    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:47.463318    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:47.480902    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:47.480913    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:47.492418    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:47.492429    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:43.768577    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:50.006131    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:48.770970    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:48.771231    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:48.795497    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:48.795606    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:48.811859    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:48.811927    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:48.824341    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:48.824407    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:48.835669    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:48.835748    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:48.846751    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:48.846817    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:48.862442    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:48.862518    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:48.872859    7676 logs.go:276] 0 containers: []
	W0610 03:39:48.872870    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:48.872926    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:48.883306    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:48.883323    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:48.883327    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:48.922222    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:48.922236    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:48.937422    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:48.937434    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:48.952229    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:48.952241    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:48.969904    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:48.969917    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:48.983917    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:48.983930    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:48.999524    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:48.999537    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:49.013475    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:49.013485    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:49.024750    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:49.024759    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:49.060251    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:49.060263    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:49.074342    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:49.074355    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:49.097214    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:49.097222    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:49.108614    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:49.108625    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:49.112533    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:49.112543    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:49.125793    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:49.125803    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:49.163068    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:49.163081    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:49.174029    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:49.174040    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:51.687366    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:55.008249    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:55.008481    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:55.030224    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:39:55.030322    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:55.045127    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:39:55.045204    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:55.057775    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:39:55.057856    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:55.068252    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:39:55.068320    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:55.078613    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:39:55.078682    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:55.089220    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:39:55.089289    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:55.099649    7510 logs.go:276] 0 containers: []
	W0610 03:39:55.099659    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:55.099720    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:55.110209    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:39:55.110224    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:39:55.110229    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:39:55.121824    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:39:55.121834    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:39:55.133695    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:39:55.133705    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:55.145108    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:39:55.145119    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:39:55.159542    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:39:55.159556    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:39:55.175249    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:55.175260    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:55.210015    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:39:55.210026    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:39:55.224822    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:39:55.224833    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:39:55.239394    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:39:55.239403    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:39:55.250964    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:39:55.250975    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:39:55.268748    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:55.268757    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:55.293779    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:55.293788    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:55.328327    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:55.328336    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:57.835128    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:56.689741    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:56.690044    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:56.722052    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:56.722187    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:56.741262    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:56.741352    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:56.755101    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:56.755183    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:56.767093    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:56.767168    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:56.779171    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:56.779245    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:56.790055    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:56.790129    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:56.800345    7676 logs.go:276] 0 containers: []
	W0610 03:39:56.800356    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:56.800415    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:56.811119    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:56.811140    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:56.811146    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:56.832697    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:56.832708    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:56.844986    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:56.844998    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:56.868017    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:56.868025    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:56.882197    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:56.882208    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:56.898369    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:56.898380    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:56.910291    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:56.910301    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:56.922140    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:56.922151    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:56.962029    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:56.962040    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:56.966765    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:56.966771    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:57.004244    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:57.004254    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:57.017882    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:57.017891    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:57.028645    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:57.028657    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:57.043147    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:57.043158    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:57.054814    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:57.054824    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:57.067396    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:57.067407    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:57.104888    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:57.104899    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:02.837698    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:02.838064    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:02.868109    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:02.868243    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:02.887600    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:02.887685    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:02.902825    7510 logs.go:276] 2 containers: [89f9105df8f8 5fc29143e075]
	I0610 03:40:02.902907    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:02.919460    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:02.919532    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:02.930367    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:02.930439    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:02.940677    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:02.940743    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:02.954944    7510 logs.go:276] 0 containers: []
	W0610 03:40:02.954956    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:02.955013    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:02.966132    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:02.966151    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:02.966157    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:02.981280    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:02.981293    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:02.992983    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:02.992994    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:59.621042    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:03.004713    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:03.004723    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:03.042363    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:03.042375    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:03.054264    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:03.054275    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:03.065968    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:03.065979    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:03.086245    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:03.086258    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:03.097980    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:03.097992    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:03.116017    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:03.116030    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:03.140673    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:03.140683    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:03.175036    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:03.175045    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:03.179516    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:03.179524    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:05.696176    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:04.623444    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:04.623737    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:04.645626    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:04.645735    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:04.662856    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:04.662931    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:04.675374    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:04.675438    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:04.686250    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:04.686322    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:04.696740    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:04.696802    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:04.707110    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:04.707174    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:04.721413    7676 logs.go:276] 0 containers: []
	W0610 03:40:04.721423    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:04.721474    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:04.732511    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:04.732529    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:04.732534    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:04.746308    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:04.746320    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:04.766212    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:04.766227    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:04.780795    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:04.780806    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:04.803230    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:04.803237    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:04.815931    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:04.815941    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:04.820244    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:04.820251    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:04.854078    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:04.854090    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:04.866149    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:04.866160    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:04.905735    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:04.905751    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:04.919979    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:04.919995    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:04.937715    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:04.937726    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:04.951565    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:04.951577    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:04.990365    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:04.990385    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:05.003392    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:05.003405    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:05.015376    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:05.015387    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:05.029426    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:05.029437    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:07.543098    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:10.698522    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:10.698780    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:10.726107    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:10.726228    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:10.743404    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:10.743492    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:10.757278    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:10.757352    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:10.768967    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:10.769024    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:10.779153    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:10.779224    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:10.789358    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:10.789429    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:10.803335    7510 logs.go:276] 0 containers: []
	W0610 03:40:10.803344    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:10.803395    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:10.813844    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:10.813863    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:10.813868    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:10.848286    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:10.848295    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:10.853052    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:10.853058    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:10.867471    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:10.867483    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:10.891446    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:10.891452    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:10.903452    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:10.903467    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:10.915290    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:10.915301    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:10.927576    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:10.927587    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:10.942573    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:10.942582    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:10.959368    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:10.959379    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:10.970394    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:10.970407    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:10.981911    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:10.981923    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:10.996883    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:10.996896    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:11.031356    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:11.031370    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:11.045117    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:11.045128    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:12.545916    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:12.546267    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:12.581681    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:12.581819    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:12.608058    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:12.608147    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:12.620855    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:12.620928    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:12.636453    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:12.636529    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:12.648088    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:12.648159    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:12.659064    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:12.659131    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:12.669522    7676 logs.go:276] 0 containers: []
	W0610 03:40:12.669534    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:12.669593    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:12.680459    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:12.680475    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:12.680482    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:12.717755    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:12.717764    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:12.721781    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:12.721787    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:12.760871    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:12.760882    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:12.775839    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:12.775848    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:12.793151    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:12.793163    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:12.804620    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:12.804631    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:12.820119    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:12.820129    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:12.841477    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:12.841487    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:12.855491    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:12.855502    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:12.867737    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:12.867748    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:12.902959    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:12.902971    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:12.914790    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:12.914801    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:12.926009    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:12.926022    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:12.950300    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:12.950310    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:12.964610    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:12.964620    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:12.978083    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:12.978093    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:13.557992    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:15.490637    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:18.560461    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:18.560812    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:18.590937    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:18.591069    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:18.609231    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:18.609334    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:18.622868    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:18.622945    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:18.634157    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:18.634235    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:18.644154    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:18.644220    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:18.654955    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:18.655023    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:18.665000    7510 logs.go:276] 0 containers: []
	W0610 03:40:18.665011    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:18.665071    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:18.675617    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:18.675637    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:18.675642    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:18.686994    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:18.687008    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:18.706761    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:18.706772    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:18.722680    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:18.722691    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:18.747839    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:18.747847    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:18.752708    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:18.752716    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:18.787552    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:18.787568    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:18.801433    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:18.801444    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:18.813273    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:18.813284    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:18.825629    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:18.825642    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:18.837198    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:18.837214    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:18.855007    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:18.855016    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:18.869893    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:18.869903    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:18.902853    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:18.902864    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:18.916923    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:18.916932    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:21.430146    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:20.491079    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:20.491199    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:20.503611    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:20.503681    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:20.515817    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:20.515888    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:20.526464    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:20.526535    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:20.537360    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:20.537434    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:20.547407    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:20.547479    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:20.561253    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:20.561321    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:20.571718    7676 logs.go:276] 0 containers: []
	W0610 03:40:20.571730    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:20.571792    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:20.582275    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:20.582293    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:20.582298    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:20.600377    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:20.600388    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:20.614639    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:20.614653    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:20.626104    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:20.626114    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:20.641035    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:20.641045    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:20.657466    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:20.657477    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:20.695408    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:20.695418    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:20.733352    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:20.733362    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:20.747922    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:20.747933    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:20.759437    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:20.759449    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:20.783507    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:20.783517    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:20.796079    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:20.796096    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:20.800441    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:20.800449    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:20.811463    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:20.811478    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:20.822739    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:20.822750    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:20.857603    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:20.857618    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:20.872010    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:20.872021    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:23.391594    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:26.432574    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:26.432853    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:26.461603    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:26.461736    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:26.480273    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:26.480364    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:26.494045    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:26.494116    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:26.505262    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:26.505331    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:26.515802    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:26.515867    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:26.526577    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:26.526649    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:26.543245    7510 logs.go:276] 0 containers: []
	W0610 03:40:26.543257    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:26.543314    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:26.553175    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:26.553191    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:26.553197    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:26.565516    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:26.565528    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:26.577260    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:26.577274    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:26.592125    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:26.592137    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:26.609847    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:26.609856    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:26.624277    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:26.624290    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:26.658574    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:26.658585    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:26.676203    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:26.676214    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:26.688478    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:26.688489    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:26.722555    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:26.722562    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:26.747126    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:26.747133    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:26.764685    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:26.764696    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:26.776219    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:26.776229    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:26.787617    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:26.787628    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:26.799216    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:26.799231    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:28.394183    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:28.394342    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:28.408462    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:28.408536    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:28.427687    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:28.427758    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:28.442132    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:28.442200    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:29.305735    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:28.452642    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:28.456441    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:28.466825    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:28.466898    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:28.477344    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:28.477412    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:28.487107    7676 logs.go:276] 0 containers: []
	W0610 03:40:28.487118    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:28.487169    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:28.497959    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:28.497977    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:28.497983    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:28.509858    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:28.509869    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:28.546799    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:28.546807    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:28.565187    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:28.565198    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:28.575641    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:28.575652    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:28.587619    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:28.587631    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:28.600159    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:28.600171    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:28.612008    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:28.612019    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:28.626506    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:28.626516    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:28.638015    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:28.638031    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:28.642226    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:28.642232    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:28.656134    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:28.656145    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:28.674392    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:28.674403    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:28.688500    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:28.688511    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:28.722903    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:28.722915    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:28.761052    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:28.761062    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:28.778811    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:28.778822    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
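
The repeating block above is minikube's apiserver wait loop: api_server.go probes https://10.0.2.15:8443/healthz, and when the probe times out ("context deadline exceeded") it falls back to a full diagnostic sweep before retrying. A minimal Go sketch of that probe step follows; the function name checkHealthz is hypothetical, and skipping TLS verification is an assumption made here because the cluster serves a self-signed certificate:

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver's /healthz endpoint once, giving up
// after timeout -- the same "Checking apiserver healthz at ..." /
// "stopped: ... context deadline exceeded" pair seen in the log above.
func checkHealthz(ctx context.Context, url string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	client := &http.Client{
		// Assumption: the cluster cert is self-signed, so this
		// illustrative probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Do(req)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	err := checkHealthz(context.Background(), "https://10.0.2.15:8443/healthz", 5*time.Second)
	fmt.Println(err)
}
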
	I0610 03:40:31.304908    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:34.308004    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:34.308165    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:34.324131    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:34.324202    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:34.336835    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:34.336905    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:34.348790    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:34.348863    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:34.359182    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:34.359244    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:34.371092    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:34.371155    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:34.381431    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:34.381497    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:34.391772    7510 logs.go:276] 0 containers: []
	W0610 03:40:34.391784    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:34.391840    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:34.404998    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:34.405017    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:34.405022    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:34.409482    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:34.409491    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:34.443945    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:34.443956    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:34.468728    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:34.468735    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:34.479776    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:34.479786    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:34.492037    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:34.492047    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:34.507069    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:34.507077    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:34.530135    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:34.530145    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:34.544185    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:34.544195    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:34.556188    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:34.556199    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:34.568026    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:34.568036    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:34.585684    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:34.585694    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:34.619701    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:34.619707    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:34.635382    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:34.635393    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:34.648024    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:34.648034    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:37.170132    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:36.306653    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:36.306870    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:36.331430    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:36.331516    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:36.345532    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:36.345604    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:36.357339    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:36.357409    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:36.367877    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:36.367948    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:36.377956    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:36.378028    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:36.388756    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:36.388816    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:36.399806    7676 logs.go:276] 0 containers: []
	W0610 03:40:36.399822    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:36.399898    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:36.410540    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:36.410557    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:36.410563    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:36.425487    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:36.425497    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:36.436769    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:36.436782    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:36.448280    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:36.448292    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:36.460619    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:36.460629    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:36.498787    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:36.498796    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:36.514359    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:36.514370    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:36.528170    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:36.528181    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:36.540064    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:36.540075    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:36.553988    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:36.553999    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:36.578421    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:36.578431    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:36.582978    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:36.582984    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:36.618035    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:36.618046    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:36.629713    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:36.629724    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:36.641093    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:36.641105    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:36.666957    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:36.666968    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:36.710124    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:36.710135    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
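
Each diagnostic sweep begins by enumerating the control-plane containers one component at a time with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, as the Run: lines record. A sketch of that enumeration step is below; listContainers is a hypothetical name, and the plain local exec.Command is a stand-in for minikube's ssh_runner, which issues the same command over SSH inside the VM:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name matches k8s_<component>, mirroring the
// "docker ps -a --filter=name=k8s_... --format={{.ID}}" calls in the log.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same component list the sweep walks in the log above.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the "N containers: [...]" lines from logs.go:276.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
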
	I0610 03:40:42.172613    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:42.172950    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:42.206583    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:42.206717    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:42.226360    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:42.226454    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:42.241263    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:42.241353    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:42.253767    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:42.253837    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:42.264484    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:42.264550    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:42.275561    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:42.275636    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:42.287107    7510 logs.go:276] 0 containers: []
	W0610 03:40:42.287117    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:42.287172    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:42.297735    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:42.297756    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:42.297761    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:42.302274    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:42.302282    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:42.317118    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:42.317129    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:42.334270    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:42.334281    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:42.359547    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:42.359558    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:42.374460    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:42.374470    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:42.386327    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:42.386343    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:42.403858    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:42.403869    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:42.415783    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:42.415795    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:42.450066    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:42.450073    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:42.484190    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:42.484201    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:42.498986    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:42.498997    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:42.510788    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:42.510799    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:42.527296    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:42.527306    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:42.539052    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:42.539062    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:39.226274    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:45.052870    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:44.228611    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:44.228860    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:44.257618    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:44.257722    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:44.274533    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:44.274614    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:44.288070    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:44.288155    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:44.299673    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:44.299748    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:44.310018    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:44.310087    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:44.321052    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:44.321118    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:44.334927    7676 logs.go:276] 0 containers: []
	W0610 03:40:44.334940    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:44.335000    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:44.345459    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:44.345479    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:44.345484    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:44.385689    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:44.385697    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:44.389893    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:44.389901    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:44.403635    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:44.403646    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:44.415048    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:44.415060    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:44.439077    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:44.439084    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:44.476746    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:44.476756    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:44.491215    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:44.491227    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:44.505877    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:44.505886    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:44.517272    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:44.517285    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:44.531179    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:44.531190    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:44.547845    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:44.547858    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:44.561556    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:44.561570    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:44.572825    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:44.572837    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:44.587342    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:44.587355    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:44.623727    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:44.623737    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:44.643451    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:44.643468    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:47.158840    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:50.055170    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:50.055412    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:50.072297    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:50.072371    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:50.086139    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:50.086212    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:50.097290    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:50.097356    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:50.114534    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:50.114606    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:50.124995    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:50.125065    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:50.135271    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:50.135340    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:50.146001    7510 logs.go:276] 0 containers: []
	W0610 03:40:50.146012    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:50.146071    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:50.156566    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:50.156582    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:50.156587    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:50.190760    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:50.190773    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:50.204938    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:50.204951    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:50.216363    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:50.216377    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:50.239839    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:50.239846    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:50.244137    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:50.244147    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:50.256358    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:50.256369    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:50.271180    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:50.271193    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:50.290332    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:50.290346    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:50.309771    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:50.309781    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:50.321165    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:50.321177    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:50.332634    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:50.332646    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:50.367738    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:50.367753    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:50.389364    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:50.389375    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:50.401264    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:50.401273    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:52.919186    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:52.159958    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:52.160154    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:52.179522    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:52.179605    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:52.194467    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:52.194544    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:52.206074    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:52.206138    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:52.216889    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:52.216957    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:52.227488    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:52.227554    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:52.238216    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:52.238280    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:52.257238    7676 logs.go:276] 0 containers: []
	W0610 03:40:52.257251    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:52.257314    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:52.275181    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:52.275200    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:52.275205    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:52.296383    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:52.296396    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:52.332888    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:52.332899    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:52.344361    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:52.344374    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:52.361119    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:52.361130    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:52.372851    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:52.372863    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:52.390227    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:52.390237    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:52.427419    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:52.427431    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:52.441584    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:52.441597    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:52.456185    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:52.456196    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:52.473814    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:52.473825    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:52.485943    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:52.485953    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:52.497569    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:52.497579    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:52.520554    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:52.520563    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:52.531807    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:52.531818    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:52.569644    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:52.569655    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:52.573655    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:52.573662    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:57.921516    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:57.921656    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:57.932072    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:40:57.932143    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:57.946357    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:40:57.946425    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:57.956873    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:40:57.956937    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:57.967414    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:40:57.967476    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:57.978199    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:40:57.978267    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:57.988659    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:40:57.988720    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:55.093459    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:58.004302    7510 logs.go:276] 0 containers: []
	W0610 03:40:58.004503    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:58.004559    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:58.015220    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:40:58.015237    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:58.015243    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:58.020335    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:58.020342    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:58.059584    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:40:58.059595    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:40:58.074227    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:40:58.074239    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:40:58.086316    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:40:58.086326    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:40:58.097809    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:40:58.097818    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:40:58.123018    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:40:58.123031    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:40:58.134484    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:40:58.134495    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:40:58.148451    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:58.148462    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:58.173040    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:58.173049    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:58.206681    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:40:58.206692    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:40:58.218262    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:40:58.218272    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:40:58.229477    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:40:58.229487    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:40:58.240863    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:40:58.240872    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:40:58.255229    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:40:58.255241    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:00.768547    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:00.096092    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:00.096334    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:00.117933    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:00.118050    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:00.136289    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:00.136369    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:00.148389    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:00.148455    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:00.166361    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:00.166430    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:00.177220    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:00.177293    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:00.187344    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:00.187408    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:00.202739    7676 logs.go:276] 0 containers: []
	W0610 03:41:00.202751    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:00.202812    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:00.213155    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:00.213174    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:00.213179    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:00.224615    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:00.224627    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:00.238339    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:00.238349    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:00.275745    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:00.275757    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:00.292540    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:00.292550    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:00.304608    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:00.304619    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:00.315545    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:00.315557    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:00.328548    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:00.328560    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:00.364939    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:00.364952    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:00.379523    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:00.379532    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:00.391496    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:00.391508    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:00.409941    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:00.409951    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:00.429521    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:00.429533    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:00.441944    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:00.441957    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:00.480616    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:00.480627    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:00.501018    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:00.501029    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:00.522931    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:00.522938    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
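
Once the container IDs are known, the sweep tails each one with docker logs --tail 400 <id> and rounds out the picture with journalctl (kubelet, docker/cri-docker), dmesg, kubectl describe nodes, and a crictl-or-docker ps fallback, exactly as the Run: lines show. A condensed sketch of the per-container tail, with gatherTail as a hypothetical name:

package main

import (
	"fmt"
	"os/exec"
)

// gatherTail fetches the last n lines of a container's logs, the same
// shape as the `docker logs --tail 400 <id>` commands issued above.
// CombinedOutput is used because `docker logs` replays the container's
// stdout and stderr on the corresponding local streams.
func gatherTail(id string, n int) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
	return string(out), err
}

func main() {
	// The two kube-apiserver container IDs seen in this sweep.
	for _, id := range []string{"af28cc768591", "734fef33c2cb"} {
		logs, err := gatherTail(id, 400)
		if err != nil {
			fmt.Println(id, "error:", err)
			continue
		}
		fmt.Printf("gathered %d bytes from %s\n", len(logs), id)
	}
}
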
	I0610 03:41:03.029136    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:05.770894    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:05.771112    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:05.792247    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:05.792341    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:05.811910    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:05.811982    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:05.825597    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:05.825668    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:05.841370    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:05.841439    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:05.853017    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:05.853085    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:05.863546    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:05.863618    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:05.873465    7510 logs.go:276] 0 containers: []
	W0610 03:41:05.873477    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:05.873533    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:05.883806    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:05.883826    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:05.883832    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:05.898203    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:05.898213    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:05.910009    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:05.910020    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:05.924751    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:05.924761    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:05.942122    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:05.942132    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:05.954218    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:05.954235    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:05.966086    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:05.966101    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:05.998820    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:05.998830    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:06.041163    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:06.041175    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:06.052939    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:06.052956    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:06.064558    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:06.064571    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:06.088763    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:06.088770    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:06.093010    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:06.093016    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:06.109258    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:06.109268    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:06.120807    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:06.120816    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:08.031501    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:08.031660    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:08.045073    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:08.045151    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:08.056531    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:08.056599    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:08.067216    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:08.067279    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:08.077980    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:08.078065    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:08.091108    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:08.091171    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:08.101801    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:08.101868    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:08.112284    7676 logs.go:276] 0 containers: []
	W0610 03:41:08.112296    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:08.112358    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:08.122485    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:08.122504    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:08.122509    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:08.126966    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:08.126973    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:08.137849    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:08.137863    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:08.155556    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:08.155566    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:08.169455    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:08.169467    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:08.184040    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:08.184052    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:08.226601    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:08.226613    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:08.240936    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:08.240947    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:08.252688    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:08.252698    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:08.264634    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:08.264647    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:08.302344    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:08.302357    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:08.340345    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:08.340357    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:08.352025    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:08.352038    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:08.366659    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:08.366669    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:08.385846    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:08.385857    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:08.409817    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:08.409828    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:08.431802    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:08.431810    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:08.634408    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:10.944860    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:13.635903    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:13.636504    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:13.679593    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:13.679723    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:13.697912    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:13.698010    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:13.711937    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:13.712012    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:13.724011    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:13.724079    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:13.739484    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:13.739551    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:13.754535    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:13.754601    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:13.764377    7510 logs.go:276] 0 containers: []
	W0610 03:41:13.764387    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:13.764436    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:13.776209    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:13.776228    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:13.776233    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:13.788606    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:13.788618    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:13.800416    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:13.800427    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:13.812197    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:13.812208    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:13.824557    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:13.824567    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:13.839479    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:13.839491    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:13.844282    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:13.844289    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:13.878540    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:13.878551    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:13.898787    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:13.898797    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:13.910393    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:13.910404    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:13.929225    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:13.929237    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:13.940840    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:13.940851    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:13.964793    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:13.964802    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:13.976621    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:13.976636    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:14.009926    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:14.009935    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:16.529218    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:15.947222    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:15.947364    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:15.962436    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:15.962522    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:15.976524    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:15.976594    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:15.986827    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:15.986898    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:15.996477    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:15.996551    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:16.013932    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:16.013997    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:16.025617    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:16.025684    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:16.035618    7676 logs.go:276] 0 containers: []
	W0610 03:41:16.035631    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:16.035684    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:16.046757    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:16.046777    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:16.046783    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:16.082740    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:16.082752    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:16.096486    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:16.096496    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:16.110495    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:16.110504    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:16.130361    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:16.130374    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:16.134437    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:16.134446    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:16.171318    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:16.171333    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:16.182270    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:16.182286    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:16.198969    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:16.198979    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:16.210312    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:16.210322    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:16.221683    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:16.221695    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:16.258788    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:16.258799    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:16.274281    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:16.274293    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:16.285996    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:16.286007    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:16.300955    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:16.300970    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:16.318230    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:16.318241    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:16.339914    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:16.339925    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:21.531615    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:21.531911    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:21.570019    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:21.570120    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:21.585465    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:21.585546    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:21.597812    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:21.597879    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:21.608622    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:21.608702    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:21.623129    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:21.623201    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:21.634501    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:21.634563    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:21.648326    7510 logs.go:276] 0 containers: []
	W0610 03:41:21.648339    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:21.648395    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:21.658787    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:21.658805    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:21.658812    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:21.694205    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:21.694218    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:21.730999    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:21.731013    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:21.745550    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:21.745562    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:21.756524    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:21.756538    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:21.768155    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:21.768169    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:21.785483    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:21.785494    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:21.789975    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:21.789984    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:21.801442    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:21.801456    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:21.814875    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:21.814887    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:21.827025    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:21.827037    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:21.838600    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:21.838611    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:21.853487    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:21.853497    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:21.865341    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:21.865352    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:21.877109    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:21.877121    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:18.853598    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:24.404575    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:23.854379    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:23.854521    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:23.871118    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:23.871205    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:23.883781    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:23.883859    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:23.894649    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:23.894712    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:23.905333    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:23.905409    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:23.915294    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:23.915363    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:23.925216    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:23.925283    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:23.937448    7676 logs.go:276] 0 containers: []
	W0610 03:41:23.937460    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:23.937523    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:23.948592    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:23.948610    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:23.948616    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:23.960525    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:23.960537    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:23.984412    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:23.984426    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:23.988859    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:23.988868    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:24.029000    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:24.029015    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:24.043441    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:24.043453    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:24.081015    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:24.081029    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:24.095349    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:24.095359    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:24.106900    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:24.106914    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:24.118620    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:24.118631    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:24.136211    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:24.136222    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:24.173125    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:24.173133    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:24.206894    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:24.206912    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:24.221446    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:24.221458    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:24.234907    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:24.234919    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:24.250335    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:24.250346    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:24.262791    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:24.262803    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:26.779255    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:31.781815    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:31.781918    7676 kubeadm.go:591] duration metric: took 4m3.782328583s to restartPrimaryControlPlane
	W0610 03:41:31.781989    7676 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 03:41:31.782023    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0610 03:41:32.832666    7676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.050616792s)
	I0610 03:41:32.832736    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 03:41:32.837705    7676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 03:41:32.840560    7676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 03:41:32.843427    7676 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 03:41:32.843433    7676 kubeadm.go:156] found existing configuration files:
	
	I0610 03:41:32.843456    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf
	I0610 03:41:32.845898    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 03:41:32.845918    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 03:41:32.848517    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf
	I0610 03:41:32.851679    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 03:41:32.851702    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 03:41:32.854714    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf
	I0610 03:41:32.857221    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 03:41:32.857248    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 03:41:32.860344    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf
	I0610 03:41:32.863353    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 03:41:32.863379    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 03:41:32.865850    7676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 03:41:32.883354    7676 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0610 03:41:32.883389    7676 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 03:41:32.931115    7676 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 03:41:32.931191    7676 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 03:41:32.931271    7676 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 03:41:32.980988    7676 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 03:41:29.406917    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:29.407114    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:29.425505    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:29.425598    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:29.438806    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:29.438884    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:29.450201    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:29.450270    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:29.460933    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:29.460997    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:29.471628    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:29.471692    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:29.481858    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:29.481919    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:29.492797    7510 logs.go:276] 0 containers: []
	W0610 03:41:29.492815    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:29.492871    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:29.505267    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:29.505286    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:29.505292    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:29.520252    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:29.520267    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:29.546950    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:29.546961    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:29.558736    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:29.558750    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:29.579716    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:29.579731    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:29.591600    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:29.591614    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:29.603710    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:29.603721    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:29.615230    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:29.615244    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:29.626198    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:29.626213    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:29.650182    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:29.650190    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:29.682848    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:29.682859    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:29.694549    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:29.694564    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:29.705830    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:29.705839    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:29.710905    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:29.710912    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:29.745180    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:29.745194    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:32.260399    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:32.988121    7676 out.go:204]   - Generating certificates and keys ...
	I0610 03:41:32.988195    7676 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 03:41:32.988244    7676 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 03:41:32.988359    7676 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 03:41:32.988472    7676 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 03:41:32.988510    7676 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 03:41:32.988538    7676 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 03:41:32.988571    7676 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 03:41:32.988616    7676 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 03:41:32.988658    7676 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 03:41:32.988710    7676 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 03:41:32.988745    7676 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 03:41:32.988847    7676 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 03:41:33.129420    7676 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 03:41:33.243038    7676 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 03:41:33.344548    7676 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 03:41:33.440698    7676 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 03:41:33.468982    7676 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 03:41:33.469524    7676 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 03:41:33.469563    7676 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 03:41:33.544616    7676 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 03:41:37.262803    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:37.262988    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:37.276153    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:37.276233    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:37.289973    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:37.290044    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:37.301014    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:37.301100    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:37.311856    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:37.311923    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:37.322006    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:37.322068    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:37.332492    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:37.332554    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:37.342278    7510 logs.go:276] 0 containers: []
	W0610 03:41:37.342291    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:37.342351    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:37.353296    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:37.353315    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:37.353321    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:37.365677    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:37.365690    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:37.377548    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:37.377559    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:37.389320    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:37.389335    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:37.408291    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:37.408302    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:37.412554    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:37.412560    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:37.446913    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:37.446924    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:37.461453    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:37.461462    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:37.476511    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:37.476520    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:37.488094    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:37.488108    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:37.499294    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:37.499304    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:37.512887    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:37.512899    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:37.524843    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:37.524853    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:37.548291    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:37.548299    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:37.580349    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:37.580358    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:33.548521    7676 out.go:204]   - Booting up control plane ...
	I0610 03:41:33.548571    7676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 03:41:33.548633    7676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 03:41:33.548664    7676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 03:41:33.548716    7676 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 03:41:33.548850    7676 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 03:41:38.051355    7676 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503431 seconds
	I0610 03:41:38.051420    7676 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 03:41:38.055105    7676 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 03:41:38.571784    7676 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 03:41:38.572068    7676 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-390000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 03:41:39.075469    7676 kubeadm.go:309] [bootstrap-token] Using token: 6it540.knfmkd9jycdzv6dz
	I0610 03:41:39.081994    7676 out.go:204]   - Configuring RBAC rules ...
	I0610 03:41:39.082049    7676 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 03:41:39.082089    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 03:41:39.088979    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 03:41:39.089810    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 03:41:39.090714    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 03:41:39.091515    7676 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 03:41:39.094742    7676 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 03:41:39.270464    7676 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 03:41:39.479089    7676 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 03:41:39.479672    7676 kubeadm.go:309] 
	I0610 03:41:39.479701    7676 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 03:41:39.479704    7676 kubeadm.go:309] 
	I0610 03:41:39.479745    7676 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 03:41:39.479758    7676 kubeadm.go:309] 
	I0610 03:41:39.479772    7676 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 03:41:39.479819    7676 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 03:41:39.479854    7676 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 03:41:39.479858    7676 kubeadm.go:309] 
	I0610 03:41:39.479892    7676 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 03:41:39.479894    7676 kubeadm.go:309] 
	I0610 03:41:39.479918    7676 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 03:41:39.479922    7676 kubeadm.go:309] 
	I0610 03:41:39.479953    7676 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 03:41:39.479996    7676 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 03:41:39.480033    7676 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 03:41:39.480036    7676 kubeadm.go:309] 
	I0610 03:41:39.480080    7676 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 03:41:39.480119    7676 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 03:41:39.480123    7676 kubeadm.go:309] 
	I0610 03:41:39.480162    7676 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6it540.knfmkd9jycdzv6dz \
	I0610 03:41:39.480210    7676 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb \
	I0610 03:41:39.480222    7676 kubeadm.go:309] 	--control-plane 
	I0610 03:41:39.480225    7676 kubeadm.go:309] 
	I0610 03:41:39.480273    7676 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 03:41:39.480276    7676 kubeadm.go:309] 
	I0610 03:41:39.480313    7676 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6it540.knfmkd9jycdzv6dz \
	I0610 03:41:39.480360    7676 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb 
	I0610 03:41:39.480542    7676 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 03:41:39.480550    7676 cni.go:84] Creating CNI manager for ""
	I0610 03:41:39.480557    7676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:41:39.481918    7676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 03:41:39.488708    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 03:41:39.491598    7676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 03:41:39.497462    7676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 03:41:39.497547    7676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-390000 minikube.k8s.io/updated_at=2024_06_10T03_41_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=stopped-upgrade-390000 minikube.k8s.io/primary=true
	I0610 03:41:39.497591    7676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 03:41:39.508634    7676 ops.go:34] apiserver oom_adj: -16
	I0610 03:41:39.541362    7676 kubeadm.go:1107] duration metric: took 43.80525ms to wait for elevateKubeSystemPrivileges
	W0610 03:41:39.541505    7676 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 03:41:39.541511    7676 kubeadm.go:393] duration metric: took 4m11.555618292s to StartCluster
	I0610 03:41:39.541521    7676 settings.go:142] acquiring lock: {Name:mke35f292ed93eff7117a159773dd0e114b7dd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:41:39.541610    7676 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:41:39.542006    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/kubeconfig: {Name:mke25032b58aa44d6357ccc49c0a5254f131209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:41:39.542189    7676 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:41:39.546637    7676 out.go:177] * Verifying Kubernetes components...
	I0610 03:41:39.542231    7676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 03:41:39.542282    7676 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:41:39.554599    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:41:39.554602    7676 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-390000"
	I0610 03:41:39.554604    7676 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-390000"
	I0610 03:41:39.554616    7676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-390000"
	I0610 03:41:39.554618    7676 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-390000"
	W0610 03:41:39.554622    7676 addons.go:243] addon storage-provisioner should already be in state true
	I0610 03:41:39.554634    7676 host.go:66] Checking if "stopped-upgrade-390000" exists ...
	I0610 03:41:39.559641    7676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:41:40.100132    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:39.563540    7676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 03:41:39.563546    7676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 03:41:39.563552    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:41:39.564503    7676 kapi.go:59] client config for stopped-upgrade-390000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f80460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 03:41:39.564631    7676 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-390000"
	W0610 03:41:39.564638    7676 addons.go:243] addon default-storageclass should already be in state true
	I0610 03:41:39.564648    7676 host.go:66] Checking if "stopped-upgrade-390000" exists ...
	I0610 03:41:39.565453    7676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 03:41:39.565459    7676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 03:41:39.565463    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:41:39.648031    7676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 03:41:39.653965    7676 api_server.go:52] waiting for apiserver process to appear ...
	I0610 03:41:39.654009    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:41:39.658409    7676 api_server.go:72] duration metric: took 116.207583ms to wait for apiserver process to appear ...
	I0610 03:41:39.658417    7676 api_server.go:88] waiting for apiserver healthz status ...
	I0610 03:41:39.658424    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:39.663581    7676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 03:41:39.731413    7676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 03:41:45.102487    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:45.102753    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:45.144427    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:45.144549    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:45.165567    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:45.165640    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:45.176744    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:45.176816    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:45.187792    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:45.187857    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:45.197988    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:45.198052    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:45.213055    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:45.213123    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:45.223642    7510 logs.go:276] 0 containers: []
	W0610 03:41:45.223654    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:45.223712    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:45.234001    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:45.234017    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:45.234023    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:45.246711    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:45.246723    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:45.281389    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:45.281402    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:45.296873    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:45.296887    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:45.311814    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:45.311828    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:45.328779    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:45.328788    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:45.352422    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:45.352453    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:45.387224    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:45.387235    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:45.402518    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:45.402532    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:45.414020    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:45.414030    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:45.425558    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:45.425571    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:45.439906    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:45.439916    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:45.451446    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:45.451460    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:45.463017    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:45.463031    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:45.474753    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:45.474763    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:47.980753    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:44.660646    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:44.660699    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:52.983006    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:52.983109    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:52.995488    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:41:52.995560    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:49.661216    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:49.661255    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:53.006052    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:41:53.006118    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:53.018124    7510 logs.go:276] 4 containers: [813ec2d6967d dc4fe8f226c8 89f9105df8f8 5fc29143e075]
	I0610 03:41:53.018195    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:53.029171    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:41:53.029244    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:53.040115    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:41:53.040182    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:53.051054    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:41:53.051130    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:53.061631    7510 logs.go:276] 0 containers: []
	W0610 03:41:53.061643    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:53.061701    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:53.073585    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:41:53.073605    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:53.073611    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:53.109177    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:41:53.109199    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:41:53.125407    7510 logs.go:123] Gathering logs for coredns [5fc29143e075] ...
	I0610 03:41:53.125419    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fc29143e075"
	I0610 03:41:53.139901    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:41:53.139914    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:41:53.159195    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:41:53.159211    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:41:53.172960    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:41:53.172973    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:53.185893    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:53.185910    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:53.191053    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:53.191067    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:53.230977    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:41:53.230993    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:41:53.244509    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:41:53.244522    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:41:53.259016    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:53.259029    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:53.284654    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:41:53.284670    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:53.299436    7510 logs.go:123] Gathering logs for coredns [89f9105df8f8] ...
	I0610 03:41:53.299448    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89f9105df8f8"
	I0610 03:41:53.314133    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:41:53.314146    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:41:53.342461    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:41:53.342473    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:41:55.856655    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:54.661696    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:54.661717    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:00.858973    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:00.859162    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:42:00.877328    7510 logs.go:276] 1 containers: [824f9b6a1778]
	I0610 03:42:00.877404    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:42:00.890549    7510 logs.go:276] 1 containers: [e28cbda74696]
	I0610 03:42:00.890615    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:42:00.901654    7510 logs.go:276] 4 containers: [b082e39dd8f4 9a2bad93756d 813ec2d6967d dc4fe8f226c8]
	I0610 03:42:00.901728    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:42:00.911809    7510 logs.go:276] 1 containers: [3f33f1ff9491]
	I0610 03:42:00.911882    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:42:00.922137    7510 logs.go:276] 1 containers: [6cef1607a60f]
	I0610 03:42:00.922211    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:42:00.933755    7510 logs.go:276] 1 containers: [4408982567d4]
	I0610 03:42:00.933823    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:42:00.943678    7510 logs.go:276] 0 containers: []
	W0610 03:42:00.943689    7510 logs.go:278] No container was found matching "kindnet"
	I0610 03:42:00.943744    7510 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:42:00.954506    7510 logs.go:276] 1 containers: [8d03b3991df8]
	I0610 03:42:00.954523    7510 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:42:00.954528    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:42:00.991120    7510 logs.go:123] Gathering logs for coredns [813ec2d6967d] ...
	I0610 03:42:00.991130    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 813ec2d6967d"
	I0610 03:42:01.002949    7510 logs.go:123] Gathering logs for storage-provisioner [8d03b3991df8] ...
	I0610 03:42:01.002960    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d03b3991df8"
	I0610 03:42:01.014202    7510 logs.go:123] Gathering logs for container status ...
	I0610 03:42:01.014213    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:42:01.025762    7510 logs.go:123] Gathering logs for dmesg ...
	I0610 03:42:01.025772    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:42:01.030551    7510 logs.go:123] Gathering logs for coredns [9a2bad93756d] ...
	I0610 03:42:01.030560    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a2bad93756d"
	I0610 03:42:01.041979    7510 logs.go:123] Gathering logs for coredns [dc4fe8f226c8] ...
	I0610 03:42:01.041992    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc4fe8f226c8"
	I0610 03:42:01.053808    7510 logs.go:123] Gathering logs for kube-proxy [6cef1607a60f] ...
	I0610 03:42:01.053819    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cef1607a60f"
	I0610 03:42:01.065479    7510 logs.go:123] Gathering logs for Docker ...
	I0610 03:42:01.065490    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:42:01.090175    7510 logs.go:123] Gathering logs for kubelet ...
	I0610 03:42:01.090183    7510 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:42:01.124115    7510 logs.go:123] Gathering logs for coredns [b082e39dd8f4] ...
	I0610 03:42:01.124122    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b082e39dd8f4"
	I0610 03:42:01.134942    7510 logs.go:123] Gathering logs for kube-controller-manager [4408982567d4] ...
	I0610 03:42:01.134956    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4408982567d4"
	I0610 03:42:01.155105    7510 logs.go:123] Gathering logs for kube-apiserver [824f9b6a1778] ...
	I0610 03:42:01.155116    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 824f9b6a1778"
	I0610 03:42:01.168749    7510 logs.go:123] Gathering logs for kube-scheduler [3f33f1ff9491] ...
	I0610 03:42:01.168759    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f33f1ff9491"
	I0610 03:42:01.183054    7510 logs.go:123] Gathering logs for etcd [e28cbda74696] ...
	I0610 03:42:01.183068    7510 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e28cbda74696"
	I0610 03:41:59.662276    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:59.662318    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:03.699093    7510 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:04.663143    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:04.663196    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:08.701366    7510 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:08.705736    7510 out.go:177] 
	W0610 03:42:08.709918    7510 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0610 03:42:08.709935    7510 out.go:239] * 
	W0610 03:42:08.710682    7510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:42:08.721822    7510 out.go:177] 
	I0610 03:42:09.664128    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:09.664188    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0610 03:42:10.001760    7676 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0610 03:42:10.006153    7676 out.go:177] * Enabled addons: storage-provisioner
	I0610 03:42:10.015960    7676 addons.go:510] duration metric: took 30.473402125s for enable addons: enabled=[storage-provisioner]
	I0610 03:42:14.665414    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:14.665501    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:19.666216    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:19.666259    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
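
For context on the loop above: each api_server.go:253 / api_server.go:269 pair is minikube's apiserver health poll, an HTTPS GET against https://10.0.2.15:8443/healthz that keeps timing out until the overall node wait (6m0s in this run) expires. A minimal sketch of that style of poll, assuming a short per-request timeout and skipped certificate verification; the function and constants below are illustrative, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz GETs url until it returns 200 OK or the overall deadline
    // passes. Sketch only; minikube's real loop lives in api_server.go.
    func pollHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request deadline (the "Client.Timeout exceeded" in the log)
            Transport: &http.Transport{
                // the guest apiserver cert is not trusted by the host, so skip verification
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthy
                }
            } else {
                fmt.Printf("stopped: %s: %v\n", url, err) // mirrors the "stopped:" lines above
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver healthz never reported healthy")
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }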
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-06-10 10:33:15 UTC, ends at Mon 2024-06-10 10:42:24 UTC. --
	Jun 10 10:42:03 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 10 10:42:08 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:08Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 10 10:42:09 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:09Z" level=error msg="ContainerStats resp: {0x4000a01640 linux}"
	Jun 10 10:42:09 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:09Z" level=error msg="ContainerStats resp: {0x4000a01780 linux}"
	Jun 10 10:42:10 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:10Z" level=error msg="ContainerStats resp: {0x40007b7400 linux}"
	Jun 10 10:42:11 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:11Z" level=error msg="ContainerStats resp: {0x40000b7d80 linux}"
	Jun 10 10:42:11 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:11Z" level=error msg="ContainerStats resp: {0x400007e480 linux}"
	Jun 10 10:42:11 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:11Z" level=error msg="ContainerStats resp: {0x400007e5c0 linux}"
	Jun 10 10:42:11 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:11Z" level=error msg="ContainerStats resp: {0x40003587c0 linux}"
	Jun 10 10:42:11 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:11Z" level=error msg="ContainerStats resp: {0x40008ced40 linux}"
	Jun 10 10:42:11 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:11Z" level=error msg="ContainerStats resp: {0x40008cf300 linux}"
	Jun 10 10:42:11 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:11Z" level=error msg="ContainerStats resp: {0x40008ae280 linux}"
	Jun 10 10:42:13 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:13Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 10 10:42:18 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:18Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jun 10 10:42:21 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:21Z" level=error msg="ContainerStats resp: {0x400055f380 linux}"
	Jun 10 10:42:21 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:21Z" level=error msg="ContainerStats resp: {0x400055fac0 linux}"
	Jun 10 10:42:22 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:22Z" level=error msg="ContainerStats resp: {0x40008bc4c0 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=error msg="ContainerStats resp: {0x40008bd180 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=error msg="ContainerStats resp: {0x40008bd5c0 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=error msg="ContainerStats resp: {0x40008bdb40 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=error msg="ContainerStats resp: {0x40008bdf40 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=error msg="ContainerStats resp: {0x40007b64c0 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=error msg="ContainerStats resp: {0x4000a00840 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=error msg="ContainerStats resp: {0x40007b7240 linux}"
	Jun 10 10:42:23 running-upgrade-479000 cri-dockerd[3027]: time="2024-06-10T10:42:23Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	b082e39dd8f49       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   afde7da3d330f
	9a2bad93756d6       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   d1eab8bec684e
	813ec2d6967d2       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   afde7da3d330f
	dc4fe8f226c81       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   d1eab8bec684e
	6cef1607a60fb       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   81bfb7eb9cdf9
	8d03b3991df87       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   cecf80262a65e
	e28cbda746964       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   4faaa0cc947bd
	824f9b6a17781       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   8d7b5ea062c2d
	3f33f1ff9491a       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   a4c7dd38dfeff
	4408982567d4e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   1d033aee4005b
	
	
	==> coredns [813ec2d6967d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:34288->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:40981->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:48373->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:59779->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:58455->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:48513->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:57995->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:42366->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:51695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5144899410058074283.3026062621719237917. HINFO: read udp 10.244.0.2:33298->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9a2bad93756d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9043506554003781869.7987401925109724767. HINFO: read udp 10.244.0.3:45213->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9043506554003781869.7987401925109724767. HINFO: read udp 10.244.0.3:55605->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9043506554003781869.7987401925109724767. HINFO: read udp 10.244.0.3:34459->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9043506554003781869.7987401925109724767. HINFO: read udp 10.244.0.3:35752->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9043506554003781869.7987401925109724767. HINFO: read udp 10.244.0.3:44825->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9043506554003781869.7987401925109724767. HINFO: read udp 10.244.0.3:44000->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9043506554003781869.7987401925109724767. HINFO: read udp 10.244.0.3:54097->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b082e39dd8f4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4492409089535057097.5702701261753102483. HINFO: read udp 10.244.0.2:34038->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4492409089535057097.5702701261753102483. HINFO: read udp 10.244.0.2:34062->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4492409089535057097.5702701261753102483. HINFO: read udp 10.244.0.2:59940->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4492409089535057097.5702701261753102483. HINFO: read udp 10.244.0.2:57921->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4492409089535057097.5702701261753102483. HINFO: read udp 10.244.0.2:43820->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4492409089535057097.5702701261753102483. HINFO: read udp 10.244.0.2:45660->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4492409089535057097.5702701261753102483. HINFO: read udp 10.244.0.2:58634->10.0.2.3:53: i/o timeout
	
	
	==> coredns [dc4fe8f226c8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:58237->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:50260->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:44800->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:42186->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:42375->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:35547->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:54824->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:54322->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:49545->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 790714747223013848.2330844267646836787. HINFO: read udp 10.244.0.3:39952->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
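
All four coredns instances above fail the same way: their startup HINFO self-probe sends UDP queries from the pod network (10.244.0.x) to the upstream resolver 10.0.2.3:53 (QEMU's user-mode-networking DNS) and every read times out, so cluster DNS has no working upstream. A quick way to reproduce that probe from Go with the standard-library resolver pinned to the same upstream; the addresses come from the log above, and the query name is an arbitrary placeholder:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Force Go's built-in resolver and pin it to the upstream that
        // coredns is trying to reach (10.0.2.3:53 in the log above).
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                var d net.Dialer
                return d.DialContext(ctx, "udp", "10.0.2.3:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.io")
        if err != nil {
            // With the upstream unreachable this fails just like coredns's
            // probe: "read udp ...->10.0.2.3:53: i/o timeout".
            fmt.Println("upstream DNS check failed:", err)
            return
        }
        fmt.Println("upstream DNS OK:", addrs)
    }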
	
	
	==> describe nodes <==
	Name:               running-upgrade-479000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-479000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=running-upgrade-479000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T03_38_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:38:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-479000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:42:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:38:07 +0000   Mon, 10 Jun 2024 10:38:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:38:07 +0000   Mon, 10 Jun 2024 10:38:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:38:07 +0000   Mon, 10 Jun 2024 10:38:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:38:07 +0000   Mon, 10 Jun 2024 10:38:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-479000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 85bdd196122a4ee7aea5ca394375f39e
	  System UUID:                85bdd196122a4ee7aea5ca394375f39e
	  Boot ID:                    fc47d7a0-640a-450c-8992-82a40f7a4d3e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2vrv4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-7svjk                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-479000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-479000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-479000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-s669k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-479000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-479000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-479000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-479000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-479000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-479000 event: Registered Node running-upgrade-479000 in Controller
	
	
	==> dmesg <==
	[  +1.703021] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.081340] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.084195] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.136338] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.082873] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.084801] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.776195] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +9.637958] systemd-fstab-generator[1919]: Ignoring "noauto" for root device
	[  +2.970784] systemd-fstab-generator[2195]: Ignoring "noauto" for root device
	[  +0.149010] systemd-fstab-generator[2228]: Ignoring "noauto" for root device
	[  +0.099056] systemd-fstab-generator[2239]: Ignoring "noauto" for root device
	[  +0.099028] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +3.258202] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.219133] systemd-fstab-generator[2981]: Ignoring "noauto" for root device
	[  +0.077490] systemd-fstab-generator[2995]: Ignoring "noauto" for root device
	[  +0.080708] systemd-fstab-generator[3006]: Ignoring "noauto" for root device
	[  +0.098761] systemd-fstab-generator[3020]: Ignoring "noauto" for root device
	[  +2.123623] systemd-fstab-generator[3173]: Ignoring "noauto" for root device
	[  +4.162639] systemd-fstab-generator[3591]: Ignoring "noauto" for root device
	[  +1.701511] systemd-fstab-generator[3872]: Ignoring "noauto" for root device
	[Jun10 10:34] kauditd_printk_skb: 68 callbacks suppressed
	[Jun10 10:37] kauditd_printk_skb: 23 callbacks suppressed
	[Jun10 10:38] systemd-fstab-generator[11902]: Ignoring "noauto" for root device
	[  +6.167887] systemd-fstab-generator[12498]: Ignoring "noauto" for root device
	[  +0.452671] systemd-fstab-generator[12630]: Ignoring "noauto" for root device
	
	
	==> etcd [e28cbda74696] <==
	{"level":"info","ts":"2024-06-10T10:38:03.055Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T10:38:03.056Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T10:38:03.053Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-10T10:38:03.056Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-06-10T10:38:03.055Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-06-10T10:38:03.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-06-10T10:38:03.059Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-06-10T10:38:03.917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-10T10:38:03.917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-10T10:38:03.917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-06-10T10:38:03.917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-06-10T10:38:03.917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-10T10:38:03.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-06-10T10:38:03.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-06-10T10:38:03.918Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:38:03.918Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:38:03.919Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:38:03.919Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T10:38:03.919Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-479000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T10:38:03.919Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:38:03.919Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T10:38:03.919Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T10:38:03.919Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:38:03.921Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-06-10T10:38:03.921Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:42:25 up 9 min,  0 users,  load average: 0.44, 0.30, 0.16
	Linux running-upgrade-479000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [824f9b6a1778] <==
	I0610 10:38:05.198013       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 10:38:05.199107       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0610 10:38:05.199138       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0610 10:38:05.199160       1 cache.go:39] Caches are synced for autoregister controller
	I0610 10:38:05.201070       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 10:38:05.210231       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0610 10:38:05.220759       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0610 10:38:05.938066       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 10:38:06.104421       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 10:38:06.107507       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 10:38:06.107622       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 10:38:06.239690       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 10:38:06.254137       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 10:38:06.366092       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 10:38:06.368382       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0610 10:38:06.368814       1 controller.go:611] quota admission added evaluator for: endpoints
	I0610 10:38:06.370074       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 10:38:07.229006       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0610 10:38:07.625589       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0610 10:38:07.628456       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 10:38:07.632973       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0610 10:38:07.712267       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 10:38:20.407695       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0610 10:38:21.007905       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0610 10:38:21.550984       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [4408982567d4] <==
	I0610 10:38:20.255722       1 shared_informer.go:262] Caches are synced for attach detach
	I0610 10:38:20.255784       1 shared_informer.go:262] Caches are synced for daemon sets
	I0610 10:38:20.255843       1 shared_informer.go:262] Caches are synced for TTL
	I0610 10:38:20.268262       1 shared_informer.go:262] Caches are synced for node
	I0610 10:38:20.268350       1 range_allocator.go:173] Starting range CIDR allocator
	I0610 10:38:20.268358       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0610 10:38:20.268399       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0610 10:38:20.270542       1 range_allocator.go:374] Set node running-upgrade-479000 PodCIDR to [10.244.0.0/24]
	I0610 10:38:20.296082       1 shared_informer.go:262] Caches are synced for persistent volume
	I0610 10:38:20.306738       1 shared_informer.go:262] Caches are synced for taint
	I0610 10:38:20.306785       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0610 10:38:20.306811       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-479000. Assuming now as a timestamp.
	I0610 10:38:20.306828       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0610 10:38:20.306916       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0610 10:38:20.306949       1 event.go:294] "Event occurred" object="running-upgrade-479000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-479000 event: Registered Node running-upgrade-479000 in Controller"
	I0610 10:38:20.307960       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0610 10:38:20.309676       1 shared_informer.go:262] Caches are synced for resource quota
	I0610 10:38:20.312827       1 shared_informer.go:262] Caches are synced for resource quota
	I0610 10:38:20.409290       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0610 10:38:20.727019       1 shared_informer.go:262] Caches are synced for garbage collector
	I0610 10:38:20.759334       1 shared_informer.go:262] Caches are synced for garbage collector
	I0610 10:38:20.759342       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0610 10:38:21.010815       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s669k"
	I0610 10:38:21.109199       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7svjk"
	I0610 10:38:21.111747       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2vrv4"
	
	
	==> kube-proxy [6cef1607a60f] <==
	I0610 10:38:21.524959       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0610 10:38:21.525002       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0610 10:38:21.525013       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0610 10:38:21.548695       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0610 10:38:21.548708       1 server_others.go:206] "Using iptables Proxier"
	I0610 10:38:21.548723       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0610 10:38:21.548839       1 server.go:661] "Version info" version="v1.24.1"
	I0610 10:38:21.548850       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:38:21.549176       1 config.go:444] "Starting node config controller"
	I0610 10:38:21.549185       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0610 10:38:21.549192       1 config.go:317] "Starting service config controller"
	I0610 10:38:21.549193       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0610 10:38:21.549198       1 config.go:226] "Starting endpoint slice config controller"
	I0610 10:38:21.549200       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0610 10:38:21.649739       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0610 10:38:21.649764       1 shared_informer.go:262] Caches are synced for node config
	I0610 10:38:21.649768       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [3f33f1ff9491] <==
	W0610 10:38:05.158727       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 10:38:05.159273       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 10:38:05.158741       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 10:38:05.159281       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 10:38:05.158752       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 10:38:05.159285       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 10:38:05.158779       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 10:38:05.159351       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 10:38:05.158794       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 10:38:05.159360       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 10:38:05.158809       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:38:05.159364       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:38:05.157993       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:38:05.159406       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:38:05.159691       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:38:05.159718       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 10:38:06.067353       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 10:38:06.067393       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 10:38:06.127606       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 10:38:06.127639       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 10:38:06.128012       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:38:06.128046       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 10:38:06.196591       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:38:06.196683       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 10:38:08.959527       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-06-10 10:33:15 UTC, ends at Mon 2024-06-10 10:42:25 UTC. --
	Jun 10 10:38:09 running-upgrade-479000 kubelet[12504]: E0610 10:38:09.457570   12504 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-479000\" already exists" pod="kube-system/etcd-running-upgrade-479000"
	Jun 10 10:38:09 running-upgrade-479000 kubelet[12504]: E0610 10:38:09.659302   12504 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-479000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-479000"
	Jun 10 10:38:09 running-upgrade-479000 kubelet[12504]: I0610 10:38:09.704267   12504 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/89c47605-d27b-4b96-afa6-a2bd171dc007/volumes"
	Jun 10 10:38:09 running-upgrade-479000 kubelet[12504]: I0610 10:38:09.704288   12504 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/6bca1215-47cf-47b4-b771-4a1bb082d640/volumes"
	Jun 10 10:38:09 running-upgrade-479000 kubelet[12504]: I0610 10:38:09.704298   12504 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/f31cda8f-53f1-4326-9f35-663f41bf7d82/volumes"
	Jun 10 10:38:09 running-upgrade-479000 kubelet[12504]: I0610 10:38:09.855871   12504 request.go:601] Waited for 1.111957597s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jun 10 10:38:09 running-upgrade-479000 kubelet[12504]: E0610 10:38:09.859270   12504 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-479000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-479000"
	Jun 10 10:38:20 running-upgrade-479000 kubelet[12504]: I0610 10:38:20.286815   12504 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 10 10:38:20 running-upgrade-479000 kubelet[12504]: I0610 10:38:20.287181   12504 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 10 10:38:20 running-upgrade-479000 kubelet[12504]: I0610 10:38:20.311781   12504 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 10:38:20 running-upgrade-479000 kubelet[12504]: I0610 10:38:20.493648   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvsk\" (UniqueName: \"kubernetes.io/projected/54c0f5eb-ead3-43a1-b96e-858b68f94a75-kube-api-access-8wvsk\") pod \"storage-provisioner\" (UID: \"54c0f5eb-ead3-43a1-b96e-858b68f94a75\") " pod="kube-system/storage-provisioner"
	Jun 10 10:38:20 running-upgrade-479000 kubelet[12504]: I0610 10:38:20.493738   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/54c0f5eb-ead3-43a1-b96e-858b68f94a75-tmp\") pod \"storage-provisioner\" (UID: \"54c0f5eb-ead3-43a1-b96e-858b68f94a75\") " pod="kube-system/storage-provisioner"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.013178   12504 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.113830   12504 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.117272   12504 topology_manager.go:200] "Topology Admit Handler"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.198552   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d65daeae-ea15-4b84-8d13-79decccfdb7c-xtables-lock\") pod \"kube-proxy-s669k\" (UID: \"d65daeae-ea15-4b84-8d13-79decccfdb7c\") " pod="kube-system/kube-proxy-s669k"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.198587   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d65daeae-ea15-4b84-8d13-79decccfdb7c-lib-modules\") pod \"kube-proxy-s669k\" (UID: \"d65daeae-ea15-4b84-8d13-79decccfdb7c\") " pod="kube-system/kube-proxy-s669k"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.198615   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d65daeae-ea15-4b84-8d13-79decccfdb7c-kube-proxy\") pod \"kube-proxy-s669k\" (UID: \"d65daeae-ea15-4b84-8d13-79decccfdb7c\") " pod="kube-system/kube-proxy-s669k"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.198628   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmc9\" (UniqueName: \"kubernetes.io/projected/d65daeae-ea15-4b84-8d13-79decccfdb7c-kube-api-access-8qmc9\") pod \"kube-proxy-s669k\" (UID: \"d65daeae-ea15-4b84-8d13-79decccfdb7c\") " pod="kube-system/kube-proxy-s669k"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.299253   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfd2v\" (UniqueName: \"kubernetes.io/projected/2e69f8b0-d3a3-4901-a5dd-5d190b141c2f-kube-api-access-qfd2v\") pod \"coredns-6d4b75cb6d-7svjk\" (UID: \"2e69f8b0-d3a3-4901-a5dd-5d190b141c2f\") " pod="kube-system/coredns-6d4b75cb6d-7svjk"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.299874   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3f8deb-d185-4607-bd95-a3de6855c1ed-config-volume\") pod \"coredns-6d4b75cb6d-2vrv4\" (UID: \"ad3f8deb-d185-4607-bd95-a3de6855c1ed\") " pod="kube-system/coredns-6d4b75cb6d-2vrv4"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.299908   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jmjq\" (UniqueName: \"kubernetes.io/projected/ad3f8deb-d185-4607-bd95-a3de6855c1ed-kube-api-access-4jmjq\") pod \"coredns-6d4b75cb6d-2vrv4\" (UID: \"ad3f8deb-d185-4607-bd95-a3de6855c1ed\") " pod="kube-system/coredns-6d4b75cb6d-2vrv4"
	Jun 10 10:38:21 running-upgrade-479000 kubelet[12504]: I0610 10:38:21.299928   12504 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e69f8b0-d3a3-4901-a5dd-5d190b141c2f-config-volume\") pod \"coredns-6d4b75cb6d-7svjk\" (UID: \"2e69f8b0-d3a3-4901-a5dd-5d190b141c2f\") " pod="kube-system/coredns-6d4b75cb6d-7svjk"
	Jun 10 10:41:59 running-upgrade-479000 kubelet[12504]: I0610 10:41:59.203339   12504 scope.go:110] "RemoveContainer" containerID="89f9105df8f8fc0067947ba517b767c756dd7e7be599e3a44aa4e8f005df7700"
	Jun 10 10:42:00 running-upgrade-479000 kubelet[12504]: I0610 10:42:00.223313   12504 scope.go:110] "RemoveContainer" containerID="5fc29143e0754cb89e9c84b4093fcae072ade59fe7e698dac346df4404b9aa8c"
	
	
	==> storage-provisioner [8d03b3991df8] <==
	I0610 10:38:20.823474       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 10:38:20.827205       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 10:38:20.827262       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 10:38:20.832421       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 10:38:20.832632       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-479000_9170d851-5a50-4e75-9d26-e4cc65be9e20!
	I0610 10:38:20.832814       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce02dbd7-37ab-4757-8a73-a1fefea91391", APIVersion:"v1", ResourceVersion:"331", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-479000_9170d851-5a50-4e75-9d26-e4cc65be9e20 became leader
	I0610 10:38:20.933578       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-479000_9170d851-5a50-4e75-9d26-e4cc65be9e20!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-479000 -n running-upgrade-479000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-479000 -n running-upgrade-479000: exit status 2 (15.702856166s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-479000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-479000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-479000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-479000: (1.072549583s)
--- FAIL: TestRunningBinaryUpgrade (599.81s)
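
The post-mortem above shows the harness pattern used throughout this report (helpers_test.go lines 254-256): run the minikube binary, capture stdout, and treat a non-zero exit from `status` as informational rather than fatal ("may be ok" for a stopped host). A minimal sketch of that pattern in Go; the binary path and profile name are copied from the log and serve only as examples here, this is not the actual helpers_test.go code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same status probe the harness runs above; profile name from the log.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.APIServer}}", "-p", "running-upgrade-479000")
		out, err := cmd.Output()
		fmt.Printf("stdout: %s", out) // e.g. "Stopped"
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Non-zero exit (e.g. 2 or 7) can still be "ok" for a stopped
			// host, so report it instead of failing hard.
			fmt.Println("exit status:", ee.ExitCode())
		} else if err != nil {
			fmt.Println("could not run minikube:", err) // binary missing, etc.
		}
	}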

TestKubernetesUpgrade (18.29s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.770673209s)

-- stdout --
	* [kubernetes-upgrade-122000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-122000" primary control-plane node in "kubernetes-upgrade-122000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-122000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:35:42.371726    7592 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:35:42.371889    7592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:35:42.371893    7592 out.go:304] Setting ErrFile to fd 2...
	I0610 03:35:42.371895    7592 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:35:42.372061    7592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:35:42.373340    7592 out.go:298] Setting JSON to false
	I0610 03:35:42.390126    7592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5713,"bootTime":1718010029,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:35:42.390193    7592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:35:42.395269    7592 out.go:177] * [kubernetes-upgrade-122000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:35:42.402151    7592 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:35:42.402223    7592 notify.go:220] Checking for updates...
	I0610 03:35:42.408244    7592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:35:42.411178    7592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:35:42.414213    7592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:35:42.417270    7592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:35:42.418565    7592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:35:42.421527    7592 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:35:42.421604    7592 config.go:182] Loaded profile config "running-upgrade-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:35:42.421659    7592 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:35:42.426246    7592 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:35:42.431260    7592 start.go:297] selected driver: qemu2
	I0610 03:35:42.431265    7592 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:35:42.431271    7592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:35:42.433534    7592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:35:42.436204    7592 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:35:42.439345    7592 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 03:35:42.439403    7592 cni.go:84] Creating CNI manager for ""
	I0610 03:35:42.439410    7592 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 03:35:42.439437    7592 start.go:340] cluster config:
	{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:35:42.443985    7592 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:35:42.451245    7592 out.go:177] * Starting "kubernetes-upgrade-122000" primary control-plane node in "kubernetes-upgrade-122000" cluster
	I0610 03:35:42.455201    7592 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:35:42.455217    7592 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 03:35:42.455227    7592 cache.go:56] Caching tarball of preloaded images
	I0610 03:35:42.455291    7592 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:35:42.455299    7592 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 03:35:42.455372    7592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/kubernetes-upgrade-122000/config.json ...
	I0610 03:35:42.455387    7592 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/kubernetes-upgrade-122000/config.json: {Name:mk56e6bf08e72ac76713956edb46c5678bfe3828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:35:42.455736    7592 start.go:360] acquireMachinesLock for kubernetes-upgrade-122000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:35:42.455769    7592 start.go:364] duration metric: took 27.25µs to acquireMachinesLock for "kubernetes-upgrade-122000"
	I0610 03:35:42.455780    7592 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:35:42.455803    7592 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:35:42.460246    7592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:35:42.477295    7592 start.go:159] libmachine.API.Create for "kubernetes-upgrade-122000" (driver="qemu2")
	I0610 03:35:42.477323    7592 client.go:168] LocalClient.Create starting
	I0610 03:35:42.477400    7592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:35:42.477436    7592 main.go:141] libmachine: Decoding PEM data...
	I0610 03:35:42.477446    7592 main.go:141] libmachine: Parsing certificate...
	I0610 03:35:42.477496    7592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:35:42.477519    7592 main.go:141] libmachine: Decoding PEM data...
	I0610 03:35:42.477526    7592 main.go:141] libmachine: Parsing certificate...
	I0610 03:35:42.477907    7592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:35:42.623353    7592 main.go:141] libmachine: Creating SSH key...
	I0610 03:35:42.686817    7592 main.go:141] libmachine: Creating Disk image...
	I0610 03:35:42.686823    7592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:35:42.686984    7592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:35:42.699749    7592 main.go:141] libmachine: STDOUT: 
	I0610 03:35:42.699773    7592 main.go:141] libmachine: STDERR: 
	I0610 03:35:42.699833    7592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2 +20000M
	I0610 03:35:42.710884    7592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:35:42.710906    7592 main.go:141] libmachine: STDERR: 
	I0610 03:35:42.710921    7592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:35:42.710933    7592 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:35:42.710967    7592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e5:c1:f8:8c:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:35:42.712652    7592 main.go:141] libmachine: STDOUT: 
	I0610 03:35:42.712669    7592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:35:42.712690    7592 client.go:171] duration metric: took 235.366167ms to LocalClient.Create
	I0610 03:35:44.715055    7592 start.go:128] duration metric: took 2.259254917s to createHost
	I0610 03:35:44.715196    7592 start.go:83] releasing machines lock for "kubernetes-upgrade-122000", held for 2.259452792s
	W0610 03:35:44.715258    7592 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:35:44.722563    7592 out.go:177] * Deleting "kubernetes-upgrade-122000" in qemu2 ...
	W0610 03:35:44.753330    7592 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:35:44.753372    7592 start.go:728] Will try again in 5 seconds ...
	I0610 03:35:49.754234    7592 start.go:360] acquireMachinesLock for kubernetes-upgrade-122000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:35:49.754743    7592 start.go:364] duration metric: took 407.959µs to acquireMachinesLock for "kubernetes-upgrade-122000"
	I0610 03:35:49.754898    7592 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:35:49.755209    7592 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:35:49.765849    7592 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:35:49.816123    7592 start.go:159] libmachine.API.Create for "kubernetes-upgrade-122000" (driver="qemu2")
	I0610 03:35:49.816178    7592 client.go:168] LocalClient.Create starting
	I0610 03:35:49.816293    7592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:35:49.816371    7592 main.go:141] libmachine: Decoding PEM data...
	I0610 03:35:49.816388    7592 main.go:141] libmachine: Parsing certificate...
	I0610 03:35:49.816465    7592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:35:49.816509    7592 main.go:141] libmachine: Decoding PEM data...
	I0610 03:35:49.816522    7592 main.go:141] libmachine: Parsing certificate...
	I0610 03:35:49.817295    7592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:35:49.970639    7592 main.go:141] libmachine: Creating SSH key...
	I0610 03:35:50.035006    7592 main.go:141] libmachine: Creating Disk image...
	I0610 03:35:50.035012    7592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:35:50.035201    7592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:35:50.048352    7592 main.go:141] libmachine: STDOUT: 
	I0610 03:35:50.048386    7592 main.go:141] libmachine: STDERR: 
	I0610 03:35:50.048444    7592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2 +20000M
	I0610 03:35:50.059580    7592 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:35:50.059605    7592 main.go:141] libmachine: STDERR: 
	I0610 03:35:50.059620    7592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:35:50.059625    7592 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:35:50.059666    7592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:0b:b1:5f:d7:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:35:50.061412    7592 main.go:141] libmachine: STDOUT: 
	I0610 03:35:50.061429    7592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:35:50.061442    7592 client.go:171] duration metric: took 245.262042ms to LocalClient.Create
	I0610 03:35:52.063523    7592 start.go:128] duration metric: took 2.30832775s to createHost
	I0610 03:35:52.063556    7592 start.go:83] releasing machines lock for "kubernetes-upgrade-122000", held for 2.308824667s
	W0610 03:35:52.063751    7592 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-122000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:35:52.082033    7592 out.go:177] 
	W0610 03:35:52.086024    7592 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:35:52.086051    7592 out.go:239] * 
	* 
	W0610 03:35:52.087379    7592 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:35:52.098923    7592 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-122000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-122000: (3.070818917s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-122000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-122000 status --format={{.Host}}: exit status 7 (64.507583ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.182460209s)

-- stdout --
	* [kubernetes-upgrade-122000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-122000" primary control-plane node in "kubernetes-upgrade-122000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-122000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:35:55.280963    7630 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:35:55.281086    7630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:35:55.281090    7630 out.go:304] Setting ErrFile to fd 2...
	I0610 03:35:55.281093    7630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:35:55.281237    7630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:35:55.282428    7630 out.go:298] Setting JSON to false
	I0610 03:35:55.299221    7630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5726,"bootTime":1718010029,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:35:55.299301    7630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:35:55.303864    7630 out.go:177] * [kubernetes-upgrade-122000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:35:55.311706    7630 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:35:55.313075    7630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:35:55.311745    7630 notify.go:220] Checking for updates...
	I0610 03:35:55.318724    7630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:35:55.321823    7630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:35:55.324701    7630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:35:55.327680    7630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:35:55.331043    7630 config.go:182] Loaded profile config "kubernetes-upgrade-122000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0610 03:35:55.331294    7630 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:35:55.335756    7630 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:35:55.342744    7630 start.go:297] selected driver: qemu2
	I0610 03:35:55.342750    7630 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:35:55.342824    7630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:35:55.345079    7630 cni.go:84] Creating CNI manager for ""
	I0610 03:35:55.345095    7630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:35:55.345126    7630 start.go:340] cluster config:
	{Name:kubernetes-upgrade-122000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-122000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:35:55.349353    7630 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:35:55.356732    7630 out.go:177] * Starting "kubernetes-upgrade-122000" primary control-plane node in "kubernetes-upgrade-122000" cluster
	I0610 03:35:55.359743    7630 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:35:55.359758    7630 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:35:55.359769    7630 cache.go:56] Caching tarball of preloaded images
	I0610 03:35:55.359835    7630 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:35:55.359840    7630 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:35:55.359895    7630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/kubernetes-upgrade-122000/config.json ...
	I0610 03:35:55.360353    7630 start.go:360] acquireMachinesLock for kubernetes-upgrade-122000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:35:55.360378    7630 start.go:364] duration metric: took 19.958µs to acquireMachinesLock for "kubernetes-upgrade-122000"
	I0610 03:35:55.360392    7630 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:35:55.360397    7630 fix.go:54] fixHost starting: 
	I0610 03:35:55.360508    7630 fix.go:112] recreateIfNeeded on kubernetes-upgrade-122000: state=Stopped err=<nil>
	W0610 03:35:55.360518    7630 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:35:55.368595    7630 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-122000" ...
	I0610 03:35:55.372810    7630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:0b:b1:5f:d7:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:35:55.374743    7630 main.go:141] libmachine: STDOUT: 
	I0610 03:35:55.374760    7630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:35:55.374794    7630 fix.go:56] duration metric: took 14.396917ms for fixHost
	I0610 03:35:55.374798    7630 start.go:83] releasing machines lock for "kubernetes-upgrade-122000", held for 14.416625ms
	W0610 03:35:55.374805    7630 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:35:55.374848    7630 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:35:55.374853    7630 start.go:728] Will try again in 5 seconds ...
	I0610 03:36:00.375132    7630 start.go:360] acquireMachinesLock for kubernetes-upgrade-122000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:36:00.375694    7630 start.go:364] duration metric: took 415.25µs to acquireMachinesLock for "kubernetes-upgrade-122000"
	I0610 03:36:00.375853    7630 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:36:00.375873    7630 fix.go:54] fixHost starting: 
	I0610 03:36:00.376596    7630 fix.go:112] recreateIfNeeded on kubernetes-upgrade-122000: state=Stopped err=<nil>
	W0610 03:36:00.376625    7630 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:36:00.384111    7630 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-122000" ...
	I0610 03:36:00.388229    7630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:0b:b1:5f:d7:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubernetes-upgrade-122000/disk.qcow2
	I0610 03:36:00.398290    7630 main.go:141] libmachine: STDOUT: 
	I0610 03:36:00.398345    7630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:36:00.398457    7630 fix.go:56] duration metric: took 22.585084ms for fixHost
	I0610 03:36:00.398473    7630 start.go:83] releasing machines lock for "kubernetes-upgrade-122000", held for 22.756792ms
	W0610 03:36:00.398620    7630 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-122000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:36:00.406976    7630 out.go:177] 
	W0610 03:36:00.410041    7630 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:36:00.410075    7630 out.go:239] * 
	* 
	W0610 03:36:00.411509    7630 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:36:00.419940    7630 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-122000 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-122000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-122000 version --output=json: exit status 1 (62.62775ms)

** stderr ** 
	error: context "kubernetes-upgrade-122000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-06-10 03:36:00.497322 -0700 PDT m=+981.137966042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-122000 -n kubernetes-upgrade-122000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-122000 -n kubernetes-upgrade-122000: exit status 7 (32.119042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-122000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-122000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-122000
--- FAIL: TestKubernetesUpgrade (18.29s)
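
Every start attempt in this test fails the same way: the qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the upgrade path is never exercised. A quick way to check for that symptom before rerunning is to dial the socket directly. A minimal sketch, assuming only the socket path shown in the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failures above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same symptom as the log: "Failed to connect to
			// \"/var/run/socket_vmnet\": Connection refused"
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, the socket_vmnet daemon is not running (or the socket path is stale), which matches the repeated GUEST_PROVISION exits above.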

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.01s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19046
- KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3917490554/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.01s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19046
- KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1573119501/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.00s)
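
Both TestHyperkitDriverSkipUpgrade subtests fail deterministically for the same reason: hyperkit is an Intel-only macOS hypervisor, so on this darwin/arm64 host minikube rejects the driver with DRV_UNSUPPORTED_OS and exit status 56 before any upgrade logic runs. A sketch of the kind of platform gate that produces this outcome; the message and exit code are taken from the log, while the check itself is illustrative and not minikube's actual source:

	package main

	import (
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		// hyperkit only runs on Intel Macs; mirror the gate seen in the log.
		if runtime.GOOS == "darwin" && runtime.GOARCH != "amd64" {
			fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
				runtime.GOOS, runtime.GOARCH)
			os.Exit(56) // exit status observed in the test output above
		}
		fmt.Println("hyperkit is a candidate driver on this platform")
	}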

TestStoppedBinaryUpgrade/Upgrade (577.63s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1898606673 start -p stopped-upgrade-390000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1898606673 start -p stopped-upgrade-390000 --memory=2200 --vm-driver=qemu2 : (43.161247625s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1898606673 -p stopped-upgrade-390000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1898606673 -p stopped-upgrade-390000 stop: (12.111089125s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-390000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-390000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.265365125s)

-- stdout --
	* [stopped-upgrade-390000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-390000" primary control-plane node in "stopped-upgrade-390000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-390000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0610 03:36:58.351923    7676 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:36:58.352133    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:36:58.352137    7676 out.go:304] Setting ErrFile to fd 2...
	I0610 03:36:58.352140    7676 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:36:58.352335    7676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:36:58.353551    7676 out.go:298] Setting JSON to false
	I0610 03:36:58.373084    7676 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5789,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:36:58.373183    7676 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:36:58.378199    7676 out.go:177] * [stopped-upgrade-390000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:36:58.386112    7676 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:36:58.386153    7676 notify.go:220] Checking for updates...
	I0610 03:36:58.393032    7676 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:36:58.396165    7676 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:36:58.399098    7676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:36:58.402029    7676 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:36:58.405094    7676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:36:58.408233    7676 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:36:58.411072    7676 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 03:36:58.414055    7676 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:36:58.417069    7676 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:36:58.424079    7676 start.go:297] selected driver: qemu2
	I0610 03:36:58.424087    7676 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:36:58.424157    7676 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:36:58.426650    7676 cni.go:84] Creating CNI manager for ""
	I0610 03:36:58.426673    7676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:36:58.426704    7676 start.go:340] cluster config:
	{Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:36:58.426758    7676 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:36:58.434077    7676 out.go:177] * Starting "stopped-upgrade-390000" primary control-plane node in "stopped-upgrade-390000" cluster
	I0610 03:36:58.438145    7676 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 03:36:58.438161    7676 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0610 03:36:58.438172    7676 cache.go:56] Caching tarball of preloaded images
	I0610 03:36:58.438238    7676 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:36:58.438245    7676 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0610 03:36:58.438318    7676 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/config.json ...
	I0610 03:36:58.438828    7676 start.go:360] acquireMachinesLock for stopped-upgrade-390000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:36:58.438866    7676 start.go:364] duration metric: took 31.875µs to acquireMachinesLock for "stopped-upgrade-390000"
	I0610 03:36:58.438875    7676 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:36:58.438882    7676 fix.go:54] fixHost starting: 
	I0610 03:36:58.439011    7676 fix.go:112] recreateIfNeeded on stopped-upgrade-390000: state=Stopped err=<nil>
	W0610 03:36:58.439020    7676 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:36:58.446066    7676 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-390000" ...
	I0610 03:36:58.450130    7676 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51286-:22,hostfwd=tcp::51287-:2376,hostname=stopped-upgrade-390000 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/disk.qcow2
	I0610 03:36:58.499973    7676 main.go:141] libmachine: STDOUT: 
	I0610 03:36:58.499997    7676 main.go:141] libmachine: STDERR: 
	I0610 03:36:58.500003    7676 main.go:141] libmachine: Waiting for VM to start (ssh -p 51286 docker@127.0.0.1)...
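
The qemu-system-aarch64 invocation above uses user-mode networking with two hostfwd rules, so the guest's SSH and Docker daemon ports (22 and 2376) appear on the host loopback as 51286 and 51287, which is exactly what the "Waiting for VM to start (ssh -p 51286 docker@127.0.0.1)" probe relies on. A minimal manual check of those forwards, assuming the VM has booted (illustrative only, not part of the test run):

    ssh -p 51286 docker@127.0.0.1 true    # guest sshd via hostfwd=tcp::51286-:22
    nc -z 127.0.0.1 51287 && echo ok      # guest dockerd via hostfwd=tcp::51287-:2376
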
	I0610 03:37:18.814368    7676 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/config.json ...
	I0610 03:37:18.815225    7676 machine.go:94] provisionDockerMachine start ...
	I0610 03:37:18.815429    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:18.816005    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:18.816021    7676 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 03:37:18.913821    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 03:37:18.913867    7676 buildroot.go:166] provisioning hostname "stopped-upgrade-390000"
	I0610 03:37:18.914033    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:18.914359    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:18.914374    7676 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-390000 && echo "stopped-upgrade-390000" | sudo tee /etc/hostname
	I0610 03:37:19.002983    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-390000
	
	I0610 03:37:19.003110    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.003297    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.003309    7676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-390000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-390000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-390000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 03:37:19.074821    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 03:37:19.074834    7676 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19046-4812/.minikube CaCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19046-4812/.minikube}
	I0610 03:37:19.074844    7676 buildroot.go:174] setting up certificates
	I0610 03:37:19.074850    7676 provision.go:84] configureAuth start
	I0610 03:37:19.074855    7676 provision.go:143] copyHostCerts
	I0610 03:37:19.074938    7676 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem, removing ...
	I0610 03:37:19.074951    7676 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem
	I0610 03:37:19.075325    7676 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/cert.pem (1123 bytes)
	I0610 03:37:19.075533    7676 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem, removing ...
	I0610 03:37:19.075538    7676 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem
	I0610 03:37:19.075598    7676 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/key.pem (1675 bytes)
	I0610 03:37:19.075718    7676 exec_runner.go:144] found /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem, removing ...
	I0610 03:37:19.075721    7676 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem
	I0610 03:37:19.075775    7676 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.pem (1078 bytes)
	I0610 03:37:19.075872    7676 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-390000 san=[127.0.0.1 localhost minikube stopped-upgrade-390000]
	I0610 03:37:19.127423    7676 provision.go:177] copyRemoteCerts
	I0610 03:37:19.127460    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 03:37:19.127470    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:37:19.166640    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 03:37:19.174428    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 03:37:19.181414    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0610 03:37:19.188135    7676 provision.go:87] duration metric: took 113.274625ms to configureAuth
	I0610 03:37:19.188143    7676 buildroot.go:189] setting minikube options for container-runtime
	I0610 03:37:19.188263    7676 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:37:19.188303    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.188393    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.188398    7676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 03:37:19.256858    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 03:37:19.256867    7676 buildroot.go:70] root file system type: tmpfs
	I0610 03:37:19.256914    7676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 03:37:19.256961    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.257077    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.257111    7676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 03:37:19.328230    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 03:37:19.328301    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.328419    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.328428    7676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 03:37:19.688756    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 03:37:19.688773    7676 machine.go:97] duration metric: took 873.547292ms to provisionDockerMachine
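
The docker.service update above follows an install-if-changed idiom: the unit is written to docker.service.new, and only when diff reports a difference (here it fails outright because the fresh VM has no unit file yet, hence the "can't stat" message) is the new file moved into place, followed by daemon-reload, enable, and restart. A generic sketch of the same idiom, with "myunit" as a placeholder name:

    # install-if-changed for a systemd unit ("myunit" is a placeholder)
    sudo diff -u /lib/systemd/system/myunit.service /tmp/myunit.service.new || {
        sudo mv /tmp/myunit.service.new /lib/systemd/system/myunit.service
        sudo systemctl daemon-reload &&
        sudo systemctl enable myunit &&
        sudo systemctl restart myunit
    }
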
	I0610 03:37:19.688784    7676 start.go:293] postStartSetup for "stopped-upgrade-390000" (driver="qemu2")
	I0610 03:37:19.688790    7676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 03:37:19.688838    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 03:37:19.688850    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:37:19.726737    7676 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 03:37:19.728119    7676 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 03:37:19.728126    7676 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-4812/.minikube/addons for local assets ...
	I0610 03:37:19.728207    7676 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19046-4812/.minikube/files for local assets ...
	I0610 03:37:19.728327    7676 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem -> 56872.pem in /etc/ssl/certs
	I0610 03:37:19.728449    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 03:37:19.730862    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem --> /etc/ssl/certs/56872.pem (1708 bytes)
	I0610 03:37:19.738160    7676 start.go:296] duration metric: took 49.37225ms for postStartSetup
	I0610 03:37:19.738176    7676 fix.go:56] duration metric: took 21.299635583s for fixHost
	I0610 03:37:19.738212    7676 main.go:141] libmachine: Using SSH client type: native
	I0610 03:37:19.738314    7676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100bf2980] 0x100bf51e0 <nil>  [] 0s} localhost 51286 <nil> <nil>}
	I0610 03:37:19.738322    7676 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 03:37:19.806538    7676 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015840.239770546
	
	I0610 03:37:19.806546    7676 fix.go:216] guest clock: 1718015840.239770546
	I0610 03:37:19.806550    7676 fix.go:229] Guest: 2024-06-10 03:37:20.239770546 -0700 PDT Remote: 2024-06-10 03:37:19.738177 -0700 PDT m=+21.421229334 (delta=501.593546ms)
	I0610 03:37:19.806561    7676 fix.go:200] guest clock delta is within tolerance: 501.593546ms
	I0610 03:37:19.806564    7676 start.go:83] releasing machines lock for "stopped-upgrade-390000", held for 21.368034541s
	I0610 03:37:19.806632    7676 ssh_runner.go:195] Run: cat /version.json
	I0610 03:37:19.806635    7676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 03:37:19.806640    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:37:19.806651    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	W0610 03:37:19.807172    7676 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51286: connect: connection refused
	I0610 03:37:19.807197    7676 retry.go:31] will retry after 187.508835ms: dial tcp [::1]:51286: connect: connection refused
	W0610 03:37:20.035054    7676 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0610 03:37:20.035119    7676 ssh_runner.go:195] Run: systemctl --version
	I0610 03:37:20.037003    7676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 03:37:20.038895    7676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 03:37:20.038934    7676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0610 03:37:20.042431    7676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0610 03:37:20.047793    7676 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
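
The two find/sed commands above rewrite any bridge and podman CNI configs under /etc/cni/net.d so their subnet and gateway match minikube's pod CIDR. Assuming the rewrite succeeded, a spot-check would look like this (hypothetical verification step; the expected values follow from the sed expressions, not from captured output):

    grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
    # expected: "subnet": "10.244.0.0/16" and "gateway": "10.244.0.1"
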
	I0610 03:37:20.047807    7676 start.go:494] detecting cgroup driver to use...
	I0610 03:37:20.047884    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 03:37:20.055281    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0610 03:37:20.058782    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 03:37:20.062307    7676 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 03:37:20.062352    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 03:37:20.065917    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 03:37:20.069443    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 03:37:20.072698    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 03:37:20.075648    7676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 03:37:20.078616    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 03:37:20.082093    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0610 03:37:20.085520    7676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0610 03:37:20.088843    7676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 03:37:20.091573    7676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 03:37:20.095073    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:20.177117    7676 ssh_runner.go:195] Run: sudo systemctl restart containerd
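
Taken together, the sed edits above rewrite /etc/containerd/config.toml for the cgroupfs driver and the runc v2 runtime before containerd is restarted. A hypothetical spot-check of the result (values implied by the sed expressions, not captured from the VM):

    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
    # expected: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.7",
    #           restrict_oom_score_adj = false, conf_dir = "/etc/cni/net.d"
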
	I0610 03:37:20.183820    7676 start.go:494] detecting cgroup driver to use...
	I0610 03:37:20.183892    7676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 03:37:20.191301    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 03:37:20.196395    7676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 03:37:20.203128    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 03:37:20.208608    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 03:37:20.213560    7676 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 03:37:20.273299    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 03:37:20.278612    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 03:37:20.284481    7676 ssh_runner.go:195] Run: which cri-dockerd
	I0610 03:37:20.285888    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 03:37:20.289436    7676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 03:37:20.295563    7676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 03:37:20.374920    7676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 03:37:20.452533    7676 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 03:37:20.452589    7676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0610 03:37:20.457923    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:20.525077    7676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 03:37:21.642941    7676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.11786525s)
	I0610 03:37:21.643000    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0610 03:37:21.647488    7676 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0610 03:37:21.654275    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 03:37:21.658895    7676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 03:37:21.736913    7676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 03:37:21.805000    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:21.864116    7676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 03:37:21.870243    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0610 03:37:21.875170    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:21.957439    7676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0610 03:37:21.997911    7676 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 03:37:21.997994    7676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 03:37:22.000063    7676 start.go:562] Will wait 60s for crictl version
	I0610 03:37:22.000116    7676 ssh_runner.go:195] Run: which crictl
	I0610 03:37:22.001655    7676 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 03:37:22.015945    7676 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0610 03:37:22.016008    7676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 03:37:22.032467    7676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 03:37:22.060687    7676 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0610 03:37:22.060818    7676 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0610 03:37:22.062169    7676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
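
The /etc/hosts update above is an idempotent rewrite: grep -v strips any stale entry for the name, the fresh entry is appended, and the result is copied back with sudo because a plain shell redirection would not run as root. The same pattern reappears below for control-plane.minikube.internal. In generic form (NAME and ADDR are placeholders):

    { grep -v $'\tNAME$' /etc/hosts; echo $'ADDR\tNAME'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
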
	I0610 03:37:22.066270    7676 kubeadm.go:877] updating cluster {Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0610 03:37:22.066313    7676 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0610 03:37:22.066356    7676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 03:37:22.077244    7676 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 03:37:22.077252    7676 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 03:37:22.077298    7676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 03:37:22.080437    7676 ssh_runner.go:195] Run: which lz4
	I0610 03:37:22.081661    7676 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 03:37:22.082938    7676 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 03:37:22.082950    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0610 03:37:22.819524    7676 docker.go:649] duration metric: took 737.909583ms to copy over tarball
	I0610 03:37:22.819594    7676 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 03:37:24.015250    7676 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.195650625s)
	I0610 03:37:24.015269    7676 ssh_runner.go:146] rm: /preloaded.tar.lz4
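
The preload path just executed is: stat /preloaded.tar.lz4 on the guest, copy the cached tarball over when it is missing, extract it into /var with lz4 (preserving xattrs so file capabilities survive), then delete the tarball; docker is restarted just below so the daemon re-reads the restored image store. Roughly equivalent manual steps over the same forwarded port (a sketch, not the exact ssh_runner mechanics, which copy with elevated permissions):

    scp -P 51286 /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 docker@127.0.0.1:/preloaded.tar.lz4
    ssh -p 51286 docker@127.0.0.1 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
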
	I0610 03:37:24.031224    7676 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 03:37:24.034557    7676 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0610 03:37:24.040005    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:24.107507    7676 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 03:37:25.792275    7676 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.684779292s)
	I0610 03:37:25.792368    7676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 03:37:25.803715    7676 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 03:37:25.803725    7676 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0610 03:37:25.803730    7676 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 03:37:25.810138    7676 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:25.810168    7676 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:25.810215    7676 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:25.810294    7676 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:25.810361    7676 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:25.810413    7676 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 03:37:25.810456    7676 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:25.810650    7676 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:25.818929    7676 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:25.818993    7676 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:25.819051    7676 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 03:37:25.819119    7676 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:25.819156    7676 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:25.819726    7676 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:25.819750    7676 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:25.819783    7676 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:26.668879    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:26.682238    7676 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0610 03:37:26.682267    7676 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:26.682329    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0610 03:37:26.697272    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0610 03:37:26.699439    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:26.707498    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0610 03:37:26.709555    7676 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0610 03:37:26.709572    7676 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:26.709611    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0610 03:37:26.719589    7676 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0610 03:37:26.719610    7676 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0610 03:37:26.719666    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0610 03:37:26.724614    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:26.727133    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0610 03:37:26.727249    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0610 03:37:26.730238    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 03:37:26.730337    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0610 03:37:26.736630    7676 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0610 03:37:26.736636    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0610 03:37:26.736649    7676 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:26.736661    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0610 03:37:26.736684    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0610 03:37:26.736692    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0610 03:37:26.736692    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0610 03:37:26.767423    7676 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0610 03:37:26.767439    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0610 03:37:26.767611    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0610 03:37:26.832646    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0610 03:37:26.833603    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:26.834313    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	W0610 03:37:26.841838    7676 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 03:37:26.841956    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:26.863282    7676 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0610 03:37:26.863302    7676 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:26.863358    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0610 03:37:26.867823    7676 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0610 03:37:26.867842    7676 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:26.867891    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0610 03:37:26.897674    7676 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0610 03:37:26.897703    7676 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:26.897757    7676 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0610 03:37:26.911811    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0610 03:37:26.918413    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0610 03:37:26.966937    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 03:37:26.967057    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0610 03:37:26.971178    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0610 03:37:26.971208    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0610 03:37:26.994010    7676 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0610 03:37:26.994032    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0610 03:37:27.044821    7676 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 03:37:27.044933    7676 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:27.153581    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0610 03:37:27.153606    7676 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 03:37:27.153627    7676 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:27.153640    7676 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0610 03:37:27.153677    7676 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:37:27.153679    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0610 03:37:27.195835    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0610 03:37:27.195862    7676 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 03:37:27.195980    7676 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0610 03:37:27.197332    7676 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0610 03:37:27.197344    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0610 03:37:27.219858    7676 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0610 03:37:27.219872    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0610 03:37:27.496272    7676 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0610 03:37:27.496312    7676 cache_images.go:92] duration metric: took 1.692602333s to LoadCachedImages
	W0610 03:37:27.496366    7676 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
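
Each cache load above follows the same per-image pattern: inspect the image ID in the runtime, and when the stored image does not exist at the expected hash, remove it, copy the cached tarball into /var/lib/minikube/images, and pipe it into docker load. In outline (IMG and TARBALL are placeholders):

    docker image inspect --format '{{.Id}}' "$IMG"     # compare against the expected hash
    docker rmi "$IMG"                                  # drop the mismatched copy
    sudo cat "/var/lib/minikube/images/$TARBALL" | docker load
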
	I0610 03:37:27.496372    7676 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0610 03:37:27.496426    7676 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-390000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
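
The kubelet unit drop-in above uses the same clear-then-set ExecStart override documented in the docker unit earlier: the empty ExecStart= line erases the inherited command so systemd accepts the single replacement that follows. Once installed (see the 10-kubeadm.conf scp further down), the merged unit could be reviewed with:

    systemctl cat kubelet   # shows the base unit plus the 10-kubeadm.conf drop-in
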
	I0610 03:37:27.496497    7676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 03:37:27.510063    7676 cni.go:84] Creating CNI manager for ""
	I0610 03:37:27.510076    7676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:37:27.510084    7676 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 03:37:27.510092    7676 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-390000 NodeName:stopped-upgrade-390000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 03:37:27.510169    7676 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-390000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 03:37:27.510224    7676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0610 03:37:27.513735    7676 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 03:37:27.513764    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 03:37:27.516603    7676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0610 03:37:27.521418    7676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 03:37:27.526291    7676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
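
At this point the four-document kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has been staged as /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file against upstream defaults, using the pinned binaries from the log (a hypothetical step, not performed by the test):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
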
	I0610 03:37:27.531635    7676 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0610 03:37:27.532716    7676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 03:37:27.536103    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:37:27.613253    7676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 03:37:27.620421    7676 certs.go:68] Setting up /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000 for IP: 10.0.2.15
	I0610 03:37:27.620432    7676 certs.go:194] generating shared ca certs ...
	I0610 03:37:27.620441    7676 certs.go:226] acquiring lock for ca certs: {Name:mk21a2158098c453d4ecfbaacf1fd5e5adc33d66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.620644    7676 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.key
	I0610 03:37:27.620699    7676 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.key
	I0610 03:37:27.620705    7676 certs.go:256] generating profile certs ...
	I0610 03:37:27.620771    7676 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.key
	I0610 03:37:27.620792    7676 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7
	I0610 03:37:27.620802    7676 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0610 03:37:27.762362    7676 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7 ...
	I0610 03:37:27.762374    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7: {Name:mka209fce3c1d7d58298def8d16d9dfa28e624d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.762610    7676 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7 ...
	I0610 03:37:27.762616    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7: {Name:mk756e26fcf66ac152cef76d320a0821d848894c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.762740    7676 certs.go:381] copying /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt.3a4c56a7 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt
	I0610 03:37:27.762863    7676 certs.go:385] copying /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key.3a4c56a7 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key
	I0610 03:37:27.763024    7676 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/proxy-client.key
	I0610 03:37:27.763150    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687.pem (1338 bytes)
	W0610 03:37:27.763189    7676 certs.go:480] ignoring /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687_empty.pem, impossibly tiny 0 bytes
	I0610 03:37:27.763194    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 03:37:27.763212    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem (1078 bytes)
	I0610 03:37:27.763229    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem (1123 bytes)
	I0610 03:37:27.763245    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/key.pem (1675 bytes)
	I0610 03:37:27.763281    7676 certs.go:484] found cert: /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem (1708 bytes)
	I0610 03:37:27.763610    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 03:37:27.771006    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 03:37:27.777824    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 03:37:27.784543    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 03:37:27.791062    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 03:37:27.798185    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 03:37:27.805425    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 03:37:27.812253    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 03:37:27.818886    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/ssl/certs/56872.pem --> /usr/share/ca-certificates/56872.pem (1708 bytes)
	I0610 03:37:27.825900    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 03:37:27.832640    7676 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/5687.pem --> /usr/share/ca-certificates/5687.pem (1338 bytes)
	I0610 03:37:27.839200    7676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 03:37:27.844206    7676 ssh_runner.go:195] Run: openssl version
	I0610 03:37:27.846009    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56872.pem && ln -fs /usr/share/ca-certificates/56872.pem /etc/ssl/certs/56872.pem"
	I0610 03:37:27.849541    7676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56872.pem
	I0610 03:37:27.850917    7676 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:20 /usr/share/ca-certificates/56872.pem
	I0610 03:37:27.850935    7676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56872.pem
	I0610 03:37:27.852788    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56872.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 03:37:27.855486    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 03:37:27.858452    7676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:37:27.859803    7676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:37:27.859825    7676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 03:37:27.861376    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 03:37:27.864102    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5687.pem && ln -fs /usr/share/ca-certificates/5687.pem /etc/ssl/certs/5687.pem"
	I0610 03:37:27.866967    7676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5687.pem
	I0610 03:37:27.868439    7676 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:20 /usr/share/ca-certificates/5687.pem
	I0610 03:37:27.868455    7676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5687.pem
	I0610 03:37:27.870082    7676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5687.pem /etc/ssl/certs/51391683.0"
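The test -L / ln -fs pairs above populate OpenSSL's hashed CA lookup directory: each CA must be reachable as /etc/ssl/certs/<subject-hash>.0 for verification to find it. A sketch that, like the log, lets the openssl binary compute the hash (paths mirror the log; this is illustrative, not minikube's code):

// Hypothetical sketch: compute the OpenSSL subject hash for a CA file
// and link it into /etc/ssl/certs/<hash>.0, mirroring the commands above.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // `ln -fs` semantics: replace an existing link silently
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err) // needs root, as the sudo in the log implies
	}
	log.Printf("linked %s -> %s", link, cert)
}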
	I0610 03:37:27.873388    7676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 03:37:27.874716    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 03:37:27.876914    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 03:37:27.878784    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 03:37:27.880709    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 03:37:27.882603    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 03:37:27.884425    7676 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
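openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours; each control-plane cert is screened this way before being reused. The same check, sketched with Go's crypto/x509:

// Hypothetical sketch of the `-checkend 86400` test: report whether the
// certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		log.Fatal("certificate expires within 24h; regenerate it")
	}
	fmt.Println("certificate valid for at least another 24h")
}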
	I0610 03:37:27.886242    7676 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51320 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0610 03:37:27.886313    7676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 03:37:27.896813    7676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 03:37:27.900004    7676 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 03:37:27.900009    7676 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 03:37:27.900012    7676 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 03:37:27.900029    7676 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 03:37:27.902838    7676 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 03:37:27.903130    7676 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-390000" does not appear in /Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:37:27.903232    7676 kubeconfig.go:62] /Users/jenkins/minikube-integration/19046-4812/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-390000" cluster setting kubeconfig missing "stopped-upgrade-390000" context setting]
	I0610 03:37:27.903411    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/kubeconfig: {Name:mke25032b58aa44d6357ccc49c0a5254f131209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:37:27.904700    7676 kapi.go:59] client config for stopped-upgrade-390000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f80460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
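The client config above is assembled directly from the profile's client cert/key and the cluster CA. Assuming k8s.io/client-go is available, an equivalent rest.Config can be sketched like this (a sketch, not minikube's kapi code):

// Hypothetical sketch: build a rest.Config from the same cert/key/CA
// paths as the logged client config, then construct a clientset.
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	home := "/Users/jenkins/minikube-integration/19046-4812/.minikube"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: home + "/profiles/stopped-upgrade-390000/client.crt",
			KeyFile:  home + "/profiles/stopped-upgrade-390000/client.key",
			CAFile:   home + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("client configured for %s (%T)\n", cfg.Host, clientset)
}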
	I0610 03:37:27.905054    7676 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 03:37:27.908001    7676 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-390000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
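Drift detection is a plain diff -u: exit status 0 means the rendered config is unchanged, 1 means it drifted (here the CRI socket gained its unix:// scheme and the cgroup driver moved from systemd to cgroupfs), anything else is an error. A sketch of that exit-code check:

// Hypothetical sketch: detect kubeadm config drift via diff's exit
// status (0 = identical, 1 = drifted, >=2 = error), as logged above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output() // diff body lands on stdout even on exit 1
	if err == nil {
		fmt.Println("config unchanged; cluster restart can reuse it")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	log.Fatalf("diff failed: %v", err)
}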
	I0610 03:37:27.908008    7676 kubeadm.go:1154] stopping kube-system containers ...
	I0610 03:37:27.908056    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 03:37:27.918870    7676 docker.go:483] Stopping containers: [d5521dc872d7 bd2137ddded5 7e88f7ae5ad5 734fef33c2cb e6410a69bdaf 58bb62977b0b 7b9a20d5b4ac e73126707c04]
	I0610 03:37:27.918904    7676 ssh_runner.go:195] Run: docker stop d5521dc872d7 bd2137ddded5 7e88f7ae5ad5 734fef33c2cb e6410a69bdaf 58bb62977b0b 7b9a20d5b4ac e73126707c04
	I0610 03:37:27.930635    7676 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 03:37:27.936806    7676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 03:37:27.939731    7676 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 03:37:27.939739    7676 kubeadm.go:156] found existing configuration files:
	
	I0610 03:37:27.939777    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf
	I0610 03:37:27.942547    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 03:37:27.942585    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 03:37:27.946044    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf
	I0610 03:37:27.949350    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 03:37:27.949389    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 03:37:27.952360    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf
	I0610 03:37:27.955077    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 03:37:27.955105    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 03:37:27.958254    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf
	I0610 03:37:27.961320    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 03:37:27.961358    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
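The four grep/rm pairs above apply one rule: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:51320 is removed so kubeadm regenerates it. Sketched compactly:

// Hypothetical sketch of the stale-kubeconfig sweep above: remove any
// /etc/kubernetes/*.conf that does not mention the expected endpoint.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:51320"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // endpoint present: keep the file
		}
		// Missing file or wrong endpoint: mirror `sudo rm -f`.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			log.Printf("rm %s: %v", f, err)
		}
	}
}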
	I0610 03:37:27.964175    7676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 03:37:27.967205    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:27.991758    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:28.262820    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:28.396291    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 03:37:28.423505    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
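The restart then replays the kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed config, with PATH pointed at the cached v1.24.1 binaries. A hypothetical driver for the same sequence:

// Hypothetical sketch: replay the kubeadm init phases logged above,
// preferring the cached binaries the way the `sudo env PATH=...` wrapper does.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Env = append(os.Environ(),
			"PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %s: %v\n%s", strings.Join(p, " "), err, out)
		}
	}
}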
	I0610 03:37:28.458867    7676 api_server.go:52] waiting for apiserver process to appear ...
	I0610 03:37:28.458958    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:37:28.961036    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:37:29.460971    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:37:29.465274    7676 api_server.go:72] duration metric: took 1.006424583s to wait for apiserver process to appear ...
	I0610 03:37:29.465285    7676 api_server.go:88] waiting for apiserver healthz status ...
	I0610 03:37:29.465294    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:34.467330    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:34.467379    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:39.467604    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:39.467633    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:44.468011    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:44.468047    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:49.468477    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:49.468554    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:54.469477    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:54.469526    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:37:59.470899    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:37:59.470961    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:04.472207    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:04.472229    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:09.472821    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:09.472846    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:14.473808    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:14.473849    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:19.475890    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:19.475926    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:24.478115    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:24.478158    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:29.480359    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
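Each probe above times out after roughly 5s ("context deadline exceeded") and the loop simply retries until it gives up and falls back to gathering logs. A minimal poller in the same shape (TLS verification is skipped here for brevity; minikube trusts the cluster CA instead):

// Hypothetical sketch of the healthz polling loop above: probe the
// apiserver with a 5s client timeout until it answers 200 or we give up.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // simplification
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // brief pause between probes
	}
	log.Fatal("apiserver never became healthy; gather logs for diagnosis")
}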
	I0610 03:38:29.480496    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:29.494291    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:29.494369    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:29.506666    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:29.506739    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:29.517069    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:29.517134    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:29.527426    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:29.527517    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:29.538103    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:29.538176    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:29.548615    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:29.548691    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:29.558711    7676 logs.go:276] 0 containers: []
	W0610 03:38:29.558722    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:29.558782    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:29.569173    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:29.569194    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:29.569201    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:29.606728    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:29.606739    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:29.647905    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:29.647917    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:29.660115    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:29.660127    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:29.676576    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:29.676588    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:29.776605    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:29.776616    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:29.789647    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:29.789663    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:29.807201    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:29.807217    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:29.819099    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:29.819113    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:29.831184    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:29.831196    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:29.846764    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:29.846774    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:29.858161    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:29.858172    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:29.869258    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:29.869270    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:29.894086    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:29.894094    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:29.898090    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:29.898099    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:29.911856    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:29.911865    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:29.925533    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:29.925544    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
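Every diagnostic round that follows has the same shape: docker ps -a --filter=name=k8s_<component> to list live and exited container IDs, then docker logs --tail 400 on each, plus journalctl for kubelet/Docker and dmesg. One round, sketched:

// Hypothetical sketch of one log-gathering round: resolve container IDs
// per component, then tail each container's logs, as in the loop above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		log.Printf("docker ps (%s): %v", component, err)
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"storage-provisioner"}
	for _, c := range components {
		for _, id := range containerIDs(c) {
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s\n", c, id, out)
		}
	}
}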
	I0610 03:38:32.441985    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:37.444184    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:37.444406    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:37.464033    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:37.464122    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:37.476576    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:37.476647    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:37.488137    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:37.488204    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:37.498740    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:37.498824    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:37.509276    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:37.509345    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:37.519491    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:37.519559    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:37.529521    7676 logs.go:276] 0 containers: []
	W0610 03:38:37.529532    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:37.529590    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:37.540119    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:37.540136    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:37.540143    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:37.578286    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:37.578297    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:38:37.596741    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:37.596751    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:37.608549    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:37.608561    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:37.648429    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:37.648441    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:37.660106    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:37.660117    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:37.671724    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:37.671736    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:37.689253    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:37.689265    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:37.714065    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:37.714078    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:37.725975    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:37.725989    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:37.741568    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:37.741581    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:37.753174    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:37.753184    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:37.791625    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:37.791633    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:37.795811    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:37.795819    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:37.810103    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:37.810116    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:37.824658    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:37.824668    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:37.838747    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:37.838758    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:40.351011    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:45.353228    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:45.353341    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:45.365454    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:45.365527    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:45.378291    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:45.378399    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:45.389780    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:45.389857    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:45.400481    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:45.400554    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:45.410747    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:45.410816    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:45.421098    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:45.421163    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:45.432558    7676 logs.go:276] 0 containers: []
	W0610 03:38:45.432570    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:45.432627    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:45.443014    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:45.443036    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:45.443042    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:45.480103    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:45.480116    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:45.494881    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:45.494892    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:45.509644    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:45.509655    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:45.521226    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:45.521236    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:45.532433    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:45.532443    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:45.546163    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:45.546173    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:45.571261    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:45.571272    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:45.583313    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:45.583325    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:45.597698    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:45.597708    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:45.635543    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:45.635554    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:45.650301    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:45.650315    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:45.667375    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:45.667386    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:45.671615    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:45.671621    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:45.706938    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:45.706948    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:45.718594    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:45.718607    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:38:45.733687    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:45.733697    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:48.247129    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:38:53.249375    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:38:53.249490    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:38:53.260123    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:38:53.260191    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:38:53.270604    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:38:53.270674    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:38:53.281103    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:38:53.281170    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:38:53.291232    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:38:53.291302    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:38:53.301957    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:38:53.302022    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:38:53.319734    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:38:53.319795    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:38:53.329748    7676 logs.go:276] 0 containers: []
	W0610 03:38:53.329759    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:38:53.329814    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:38:53.340625    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:38:53.340645    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:38:53.340651    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:38:53.351653    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:38:53.356420    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:38:53.395344    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:38:53.395359    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:38:53.414865    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:38:53.414876    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:38:53.451481    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:38:53.451491    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:38:53.458116    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:38:53.458124    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:38:53.472099    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:38:53.472110    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:38:53.487025    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:38:53.487035    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:38:53.498921    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:38:53.498930    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:38:53.517186    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:38:53.517195    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:38:53.556103    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:38:53.556113    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:38:53.567421    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:38:53.567432    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:38:53.579258    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:38:53.579271    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:38:53.593528    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:38:53.593539    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:38:53.618870    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:38:53.618880    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:38:53.631260    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:38:53.631276    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:38:53.645261    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:38:53.645273    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:38:56.161565    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:01.163574    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:01.163809    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:01.182389    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:01.182485    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:01.197300    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:01.197376    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:01.209282    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:01.209342    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:01.219557    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:01.219625    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:01.233361    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:01.233427    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:01.243741    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:01.243807    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:01.253509    7676 logs.go:276] 0 containers: []
	W0610 03:39:01.253523    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:01.253576    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:01.264145    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:01.264166    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:01.264172    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:01.299370    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:01.299383    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:01.313856    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:01.313867    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:01.329466    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:01.329480    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:01.340373    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:01.340385    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:01.354860    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:01.354874    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:01.379658    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:01.379669    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:01.391842    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:01.391853    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:01.410409    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:01.410421    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:01.450832    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:01.450853    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:01.456418    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:01.456429    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:01.471426    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:01.471437    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:01.511156    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:01.511168    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:01.526652    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:01.526665    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:01.539223    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:01.539235    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:01.553689    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:01.553702    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:01.566683    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:01.566696    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:04.084127    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:09.086408    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:09.086571    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:09.098447    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:09.098522    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:09.109436    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:09.109506    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:09.119692    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:09.119765    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:09.130636    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:09.130717    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:09.143711    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:09.143776    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:09.154305    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:09.154375    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:09.164293    7676 logs.go:276] 0 containers: []
	W0610 03:39:09.164303    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:09.164363    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:09.174724    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:09.174744    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:09.174749    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:09.188984    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:09.188995    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:09.202176    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:09.202187    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:09.206868    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:09.206873    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:09.220884    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:09.220896    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:09.235101    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:09.235111    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:09.246246    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:09.246258    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:09.258291    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:09.258303    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:09.296111    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:09.296120    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:09.332420    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:09.332430    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:09.347276    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:09.347293    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:09.372190    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:09.372198    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:09.410539    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:09.410553    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:09.425309    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:09.425320    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:09.443063    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:09.443075    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:09.456414    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:09.456430    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:09.470876    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:09.470886    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:11.987701    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:16.990132    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
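
Pairs of lines like the two above repeat for the rest of this test: a /healthz probe against the guest apiserver at 10.0.2.15:8443 times out after roughly five seconds, minikube logs it as "stopped", gathers diagnostics, and retries. A minimal Go sketch of that probe loop follows; the URL and the ~5 s client timeout are read off the log, while the skipped TLS verification, 2 s back-off, and 4-minute give-up deadline are assumptions of the sketch, not minikube's actual api_server.go logic.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5 s between "Checking" and "stopped"
            Transport: &http.Transport{
                // assumption: the guest apiserver's cert is not trusted by this client
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if string(body) != "ok" { // /healthz answers the literal string "ok" when healthy
            return fmt.Errorf("healthz returned %q", body)
        }
        return nil
    }

    func main() {
        url := "https://10.0.2.15:8443/healthz"     // guest address and port from the log
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            if err := checkHealthz(url); err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err)
                time.Sleep(2 * time.Second) // assumed back-off between probes
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println("gave up waiting for apiserver")
    }
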
	I0610 03:39:16.990380    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:17.024375    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:17.024477    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:17.040629    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:17.040711    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:17.053516    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:17.053587    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:17.064709    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:17.064783    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:17.075103    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:17.075170    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:17.085590    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:17.085657    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:17.096341    7676 logs.go:276] 0 containers: []
	W0610 03:39:17.096354    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:17.096417    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:17.106803    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
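
Each gathering pass starts, as above, by enumerating the control-plane containers one component at a time, filtering docker ps -a on the kubelet's k8s_<component> name prefix and printing only IDs. A sketch of that discovery step (the component names and the filter/format flags are from the log; running docker locally is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        // mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println("docker ps failed:", err)
                return
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

Two IDs per component, as above, presumably means docker ps -a is also returning the exited container from a previous incarnation of each static pod alongside the current one; kindnet reports zero containers simply because no such container exists on this node.
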
	I0610 03:39:17.106820    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:17.106826    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:17.119093    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:17.119105    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:17.133767    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:17.133778    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:17.145128    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:17.145137    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:17.149771    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:17.149778    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:17.163844    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:17.163855    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:17.175385    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:17.175396    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:17.195873    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:17.195883    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:17.217562    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:17.217572    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:17.241535    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:17.241544    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:17.278935    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:17.278945    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:17.316471    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:17.316482    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:17.331147    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:17.331157    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:17.343308    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:17.343321    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:17.377483    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:17.377493    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:17.391464    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:17.391475    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:17.405652    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:17.405664    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
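
The "container status" command is the one piece of shell in these passes that is not a plain invocation. In sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, the backticks are shell command substitution: if which finds crictl its full path is used, otherwise the substitution degrades to the bare name crictl, sudo then fails to execute it, and the outer || falls through to sudo docker ps -a. A sketch of running that fallback (the command string is verbatim from the log; local execution and passwordless sudo are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // verbatim from the log; backticks are evaluated by bash, not Go
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker listing failed:", err)
        }
        fmt.Print(string(out))
    }
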
	I0610 03:39:19.919852    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:24.922557    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:24.922773    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:24.938916    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:24.939001    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:24.951023    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:24.951102    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:24.961278    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:24.961344    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:24.972175    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:24.972249    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:24.982986    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:24.983051    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:24.993727    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:24.993790    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:25.003926    7676 logs.go:276] 0 containers: []
	W0610 03:39:25.003937    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:25.003995    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:25.024105    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:25.024125    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:25.024130    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:25.063145    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:25.063157    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:25.075184    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:25.075197    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:25.110597    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:25.110608    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:25.125168    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:25.125182    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:25.140239    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:25.140253    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:25.177828    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:25.177836    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:25.192028    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:25.192039    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:25.208912    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:25.208923    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:25.226288    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:25.226298    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:25.238267    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:25.238281    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:25.250010    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:25.250022    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:25.254457    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:25.254468    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:25.265850    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:25.265861    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:25.277761    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:25.277772    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:25.291544    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:25.291554    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:25.304003    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:25.304013    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:27.830403    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:32.832477    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:32.832570    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:32.843833    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:32.843907    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:32.856216    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:32.856293    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:32.866330    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:32.866397    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:32.877064    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:32.877145    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:32.887758    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:32.887827    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:32.898524    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:32.898591    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:32.908587    7676 logs.go:276] 0 containers: []
	W0610 03:39:32.908600    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:32.908662    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:32.919321    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:32.919337    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:32.919343    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:32.931104    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:32.931116    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:32.942725    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:32.942736    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:32.947060    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:32.947067    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:32.982443    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:32.982454    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:32.996923    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:32.996934    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:33.034763    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:33.034774    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:33.052841    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:33.052852    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:33.064130    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:33.064144    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:33.078842    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:33.078859    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:33.090352    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:33.090363    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:33.129498    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:33.129514    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:33.144198    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:33.144211    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:33.161919    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:33.161929    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:33.186520    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:33.186533    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:33.198115    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:33.198128    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:33.210296    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:33.210311    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:35.726234    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:40.828983    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:40.829324    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:40.865912    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:40.866036    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:40.883506    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:40.883583    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:40.898794    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:40.898877    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:40.910104    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:40.910181    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:40.920967    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:40.921040    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:40.931776    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:40.931851    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:40.942241    7676 logs.go:276] 0 containers: []
	W0610 03:39:40.942253    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:40.942322    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:40.952704    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:40.952721    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:40.952727    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:40.966305    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:40.966317    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:40.977936    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:40.977947    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:40.989328    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:40.989341    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:41.011108    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:41.011119    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:41.026375    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:41.026388    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:41.040524    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:41.040535    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:41.054280    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:41.054289    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:41.092104    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:41.092114    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:41.108681    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:41.108692    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:41.122993    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:41.123003    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:41.161530    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:41.161542    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:41.166233    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:41.166239    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:41.178458    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:41.178470    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:41.216375    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:41.216387    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:41.228432    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:41.228442    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:41.242856    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:41.242867    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:43.768577    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:48.770970    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:48.771231    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:48.795497    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:48.795606    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:48.811859    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:48.811927    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:48.824341    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:48.824407    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:48.835669    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:48.835748    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:48.846751    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:48.846817    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:48.862442    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:48.862518    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:48.872859    7676 logs.go:276] 0 containers: []
	W0610 03:39:48.872870    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:48.872926    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:48.883306    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:48.883323    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:48.883327    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:48.922222    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:48.922236    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:48.937422    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:48.937434    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:48.952229    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:48.952241    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:48.969904    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:48.969917    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:48.983917    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:48.983930    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:48.999524    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:48.999537    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:49.013475    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:49.013485    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:49.024750    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:49.024759    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:49.060251    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:49.060263    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:49.074342    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:49.074355    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:49.097214    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:49.097222    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:49.108614    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:49.108625    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:49.112533    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:49.112543    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:49.125793    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:49.125803    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:49.163068    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:49.163081    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:49.174029    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:49.174040    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:51.687366    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:39:56.689741    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:39:56.690044    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:39:56.722052    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:39:56.722187    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:39:56.741262    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:39:56.741352    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:39:56.755101    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:39:56.755183    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:39:56.767093    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:39:56.767168    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:39:56.779171    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:39:56.779245    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:39:56.790055    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:39:56.790129    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:39:56.800345    7676 logs.go:276] 0 containers: []
	W0610 03:39:56.800356    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:39:56.800415    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:39:56.811119    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:39:56.811140    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:39:56.811146    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:39:56.832697    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:39:56.832708    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:39:56.844986    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:39:56.844998    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:39:56.868017    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:39:56.868025    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:39:56.882197    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:39:56.882208    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:39:56.898369    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:39:56.898380    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:39:56.910291    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:39:56.910301    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:39:56.922140    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:39:56.922151    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:39:56.962029    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:39:56.962040    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:39:56.966765    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:39:56.966771    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:39:57.004244    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:39:57.004254    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:39:57.017882    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:39:57.017891    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:39:57.028645    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:39:57.028657    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:39:57.043147    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:39:57.043158    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:39:57.054814    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:39:57.054824    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:39:57.067396    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:39:57.067407    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:39:57.104888    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:39:57.104899    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:39:59.621042    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:04.623444    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:04.623737    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:04.645626    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:04.645735    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:04.662856    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:04.662931    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:04.675374    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:04.675438    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:04.686250    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:04.686322    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:04.696740    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:04.696802    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:04.707110    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:04.707174    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:04.721413    7676 logs.go:276] 0 containers: []
	W0610 03:40:04.721423    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:04.721474    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:04.732511    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:04.732529    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:04.732534    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:04.746308    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:04.746320    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:04.766212    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:04.766227    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:04.780795    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:04.780806    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:04.803230    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:04.803237    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:04.815931    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:04.815941    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:04.820244    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:04.820251    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:04.854078    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:04.854090    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:04.866149    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:04.866160    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:04.905735    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:04.905751    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:04.919979    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:04.919995    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:04.937715    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:04.937726    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:04.951565    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:04.951577    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:04.990365    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:04.990385    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:05.003392    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:05.003405    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:05.015376    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:05.015387    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:05.029426    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:05.029437    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:07.543098    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:12.545916    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:12.546267    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:12.581681    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:12.581819    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:12.608058    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:12.608147    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:12.620855    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:12.620928    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:12.636453    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:12.636529    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:12.648088    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:12.648159    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:12.659064    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:12.659131    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:12.669522    7676 logs.go:276] 0 containers: []
	W0610 03:40:12.669534    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:12.669593    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:12.680459    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:12.680475    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:12.680482    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:12.717755    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:12.717764    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:12.721781    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:12.721787    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:12.760871    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:12.760882    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:12.775839    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:12.775848    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:12.793151    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:12.793163    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:12.804620    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:12.804631    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:12.820119    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:12.820129    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:12.841477    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:12.841487    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:12.855491    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:12.855502    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:12.867737    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:12.867748    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:12.902959    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:12.902971    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:12.914790    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:12.914801    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:12.926009    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:12.926022    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:12.950300    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:12.950310    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:12.964610    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:12.964620    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:12.978083    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:12.978093    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:15.490637    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:20.491079    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:20.491199    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:20.503611    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:20.503681    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:20.515817    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:20.515888    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:20.526464    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:20.526535    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:20.537360    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:20.537434    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:20.547407    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:20.547479    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:20.561253    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:20.561321    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:20.571718    7676 logs.go:276] 0 containers: []
	W0610 03:40:20.571730    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:20.571792    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:20.582275    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:20.582293    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:20.582298    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:20.600377    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:20.600388    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:20.614639    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:20.614653    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:20.626104    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:20.626114    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:20.641035    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:20.641045    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:20.657466    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:20.657477    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:20.695408    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:20.695418    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:20.733352    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:20.733362    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:20.747922    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:20.747933    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:20.759437    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:20.759449    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:20.783507    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:20.783517    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:20.796079    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:20.796096    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:20.800441    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:20.800449    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:20.811463    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:20.811478    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:20.822739    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:20.822750    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:20.857603    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:20.857618    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:20.872010    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:20.872021    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:23.391594    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:28.394183    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:28.394342    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:28.408462    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:28.408536    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:28.427687    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:28.427758    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:28.442132    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:28.442200    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:28.452642    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:28.456441    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:28.466825    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:28.466898    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:28.477344    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:28.477412    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:28.487107    7676 logs.go:276] 0 containers: []
	W0610 03:40:28.487118    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:28.487169    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:28.497959    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:28.497977    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:28.497983    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:28.509858    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:28.509869    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:28.546799    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:28.546807    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:28.565187    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:28.565198    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:28.575641    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:28.575652    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:28.587619    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:28.587631    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:28.600159    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:28.600171    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:28.612008    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:28.612019    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:28.626506    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:28.626516    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:28.638015    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:28.638031    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:28.642226    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:28.642232    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:28.656134    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:28.656145    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:28.674392    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:28.674403    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:28.688500    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:28.688511    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:28.722903    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:28.722915    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:28.761052    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:28.761062    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:28.778811    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:28.778822    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:31.304908    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:36.306653    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:36.306870    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:36.331430    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:36.331516    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:36.345532    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:36.345604    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:36.357339    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:36.357409    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:36.367877    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:36.367948    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:36.377956    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:36.378028    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:36.388756    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:36.388816    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:36.399806    7676 logs.go:276] 0 containers: []
	W0610 03:40:36.399822    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:36.399898    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:36.410540    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:36.410557    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:36.410563    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:36.425487    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:36.425497    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:36.436769    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:36.436782    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:36.448280    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:36.448292    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:36.460619    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:36.460629    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:36.498787    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:36.498796    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:36.514359    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:36.514370    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:36.528170    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:36.528181    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:36.540064    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:36.540075    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:36.553988    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:36.553999    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:36.578421    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:36.578431    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:36.582978    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:36.582984    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:36.618035    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:36.618046    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:36.629713    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:36.629724    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:36.641093    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:36.641105    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:36.666957    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:36.666968    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:36.710124    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:36.710135    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:39.226274    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:44.228611    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:44.228860    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:44.257618    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:44.257722    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:44.274533    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:44.274614    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:44.288070    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:44.288155    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:44.299673    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:44.299748    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:44.310018    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:44.310087    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:44.321052    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:44.321118    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:44.334927    7676 logs.go:276] 0 containers: []
	W0610 03:40:44.334940    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:44.335000    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:44.345459    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:44.345479    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:44.345484    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:44.385689    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:44.385697    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:44.389893    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:44.389901    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:44.403635    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:44.403646    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:44.415048    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:44.415060    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:44.439077    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:44.439084    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:44.476746    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:44.476756    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:44.491215    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:44.491227    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:44.505877    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:44.505886    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:44.517272    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:44.517285    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:44.531179    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:44.531190    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:44.547845    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:44.547858    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:44.561556    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:44.561570    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:44.572825    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:44.572837    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:44.587342    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:44.587355    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:44.623727    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:44.623737    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:44.643451    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:44.643468    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
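
Each diagnostic round starts by resolving component names to container IDs with docker ps -a --filter name=k8s_<component> --format {{.ID}} (the logs.go:276 lines), then tails 400 lines from each ID. An illustrative sketch of that discovery step, assuming a reachable docker CLI on the node:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the filter probe seen in the log: all containers,
    // running or exited, whose name carries the k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, _ := containerIDs(c)
            fmt.Printf("%d containers: %v\n", len(ids), ids) // same shape as logs.go:276
        }
    }
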
	I0610 03:40:47.158840    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:40:52.159958    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:40:52.160154    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:40:52.179522    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:40:52.179605    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:40:52.194467    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:40:52.194544    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:40:52.206074    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:40:52.206138    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:40:52.216889    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:40:52.216957    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:40:52.227488    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:40:52.227554    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:40:52.238216    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:40:52.238280    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:40:52.257238    7676 logs.go:276] 0 containers: []
	W0610 03:40:52.257251    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:40:52.257314    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:40:52.275181    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:40:52.275200    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:40:52.275205    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:40:52.296383    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:40:52.296396    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:40:52.332888    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:40:52.332899    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:40:52.344361    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:40:52.344374    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:40:52.361119    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:40:52.361130    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:40:52.372851    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:40:52.372863    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:40:52.390227    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:40:52.390237    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:40:52.427419    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:40:52.427431    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:40:52.441584    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:40:52.441597    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:40:52.456185    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:40:52.456196    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:40:52.473814    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:40:52.473825    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:40:52.485943    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:40:52.485953    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:40:52.497569    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:40:52.497579    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:40:52.520554    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:40:52.520563    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:40:52.531807    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:40:52.531818    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:40:52.569644    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:40:52.569655    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:40:52.573655    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:40:52.573662    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:40:55.093459    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:00.096092    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:00.096334    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:00.117933    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:00.118050    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:00.136289    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:00.136369    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:00.148389    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:00.148455    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:00.166361    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:00.166430    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:00.177220    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:00.177293    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:00.187344    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:00.187408    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:00.202739    7676 logs.go:276] 0 containers: []
	W0610 03:41:00.202751    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:00.202812    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:00.213155    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:00.213174    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:00.213179    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:00.224615    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:00.224627    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:00.238339    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:00.238349    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:00.275745    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:00.275757    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:00.292540    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:00.292550    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:00.304608    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:00.304619    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:00.315545    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:00.315557    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:00.328548    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:00.328560    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:00.364939    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:00.364952    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:00.379523    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:00.379532    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:00.391496    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:00.391508    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:00.409941    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:00.409951    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:00.429521    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:00.429533    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:00.441944    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:00.441957    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:00.480616    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:00.480627    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:00.501018    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:00.501029    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:00.522931    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:00.522938    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:03.029136    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:08.031501    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:08.031660    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:08.045073    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:08.045151    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:08.056531    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:08.056599    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:08.067216    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:08.067279    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:08.077980    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:08.078065    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:08.091108    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:08.091171    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:08.101801    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:08.101868    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:08.112284    7676 logs.go:276] 0 containers: []
	W0610 03:41:08.112296    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:08.112358    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:08.122485    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:08.122504    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:08.122509    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:08.126966    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:08.126973    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:08.137849    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:08.137863    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:08.155556    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:08.155566    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:08.169455    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:08.169467    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:08.184040    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:08.184052    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:08.226601    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:08.226613    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:08.240936    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:08.240947    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:08.252688    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:08.252698    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:08.264634    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:08.264647    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:08.302344    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:08.302357    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:08.340345    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:08.340357    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:08.352025    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:08.352038    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:08.366659    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:08.366669    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:08.385846    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:08.385857    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:08.409817    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:08.409828    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:08.431802    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:08.431810    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:10.944860    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:15.947222    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:15.947364    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:15.962436    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:15.962522    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:15.976524    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:15.976594    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:15.986827    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:15.986898    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:15.996477    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:15.996551    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:16.013932    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:16.013997    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:16.025617    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:16.025684    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:16.035618    7676 logs.go:276] 0 containers: []
	W0610 03:41:16.035631    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:16.035684    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:16.046757    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:16.046777    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:16.046783    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:16.082740    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:16.082752    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:16.096486    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:16.096496    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:16.110495    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:16.110504    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:16.130361    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:16.130374    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:16.134437    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:16.134446    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:16.171318    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:16.171333    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:16.182270    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:16.182286    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:16.198969    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:16.198979    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:16.210312    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:16.210322    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:16.221683    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:16.221695    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:16.258788    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:16.258799    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:16.274281    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:16.274293    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:16.285996    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:16.286007    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:16.300955    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:16.300970    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:16.318230    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:16.318241    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:16.339914    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:16.339925    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:18.853598    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:23.854379    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:23.854521    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:41:23.871118    7676 logs.go:276] 2 containers: [af28cc768591 734fef33c2cb]
	I0610 03:41:23.871205    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:41:23.883781    7676 logs.go:276] 2 containers: [eef09804b887 d5521dc872d7]
	I0610 03:41:23.883859    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:41:23.894649    7676 logs.go:276] 1 containers: [9701dab932d1]
	I0610 03:41:23.894712    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:41:23.905333    7676 logs.go:276] 2 containers: [1fb56e6ef1d9 7e88f7ae5ad5]
	I0610 03:41:23.905409    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:41:23.915294    7676 logs.go:276] 1 containers: [ea4ebeeaaca0]
	I0610 03:41:23.915363    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:41:23.925216    7676 logs.go:276] 2 containers: [5ffe2ba197cb e6410a69bdaf]
	I0610 03:41:23.925283    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:41:23.937448    7676 logs.go:276] 0 containers: []
	W0610 03:41:23.937460    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:41:23.937523    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:41:23.948592    7676 logs.go:276] 2 containers: [ea45c42c20bc a95f282d7466]
	I0610 03:41:23.948610    7676 logs.go:123] Gathering logs for storage-provisioner [a95f282d7466] ...
	I0610 03:41:23.948616    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a95f282d7466"
	I0610 03:41:23.960525    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:41:23.960537    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:41:23.984412    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:41:23.984426    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:41:23.988859    7676 logs.go:123] Gathering logs for etcd [eef09804b887] ...
	I0610 03:41:23.988868    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eef09804b887"
	I0610 03:41:24.029000    7676 logs.go:123] Gathering logs for kube-apiserver [af28cc768591] ...
	I0610 03:41:24.029015    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af28cc768591"
	I0610 03:41:24.043441    7676 logs.go:123] Gathering logs for kube-apiserver [734fef33c2cb] ...
	I0610 03:41:24.043453    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 734fef33c2cb"
	I0610 03:41:24.081015    7676 logs.go:123] Gathering logs for etcd [d5521dc872d7] ...
	I0610 03:41:24.081029    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5521dc872d7"
	I0610 03:41:24.095349    7676 logs.go:123] Gathering logs for coredns [9701dab932d1] ...
	I0610 03:41:24.095359    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9701dab932d1"
	I0610 03:41:24.106900    7676 logs.go:123] Gathering logs for kube-proxy [ea4ebeeaaca0] ...
	I0610 03:41:24.106914    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea4ebeeaaca0"
	I0610 03:41:24.118620    7676 logs.go:123] Gathering logs for kube-controller-manager [5ffe2ba197cb] ...
	I0610 03:41:24.118631    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffe2ba197cb"
	I0610 03:41:24.136211    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:41:24.136222    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:41:24.173125    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:41:24.173133    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:41:24.206894    7676 logs.go:123] Gathering logs for kube-controller-manager [e6410a69bdaf] ...
	I0610 03:41:24.206912    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6410a69bdaf"
	I0610 03:41:24.221446    7676 logs.go:123] Gathering logs for storage-provisioner [ea45c42c20bc] ...
	I0610 03:41:24.221458    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea45c42c20bc"
	I0610 03:41:24.234907    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:41:24.234919    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:41:24.250335    7676 logs.go:123] Gathering logs for kube-scheduler [1fb56e6ef1d9] ...
	I0610 03:41:24.250346    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb56e6ef1d9"
	I0610 03:41:24.262791    7676 logs.go:123] Gathering logs for kube-scheduler [7e88f7ae5ad5] ...
	I0610 03:41:24.262803    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e88f7ae5ad5"
	I0610 03:41:26.779255    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:31.781815    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:31.781918    7676 kubeadm.go:591] duration metric: took 4m3.782328583s to restartPrimaryControlPlane
	W0610 03:41:31.781989    7676 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 03:41:31.782023    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0610 03:41:32.832666    7676 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.050616792s)
	I0610 03:41:32.832736    7676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 03:41:32.837705    7676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 03:41:32.840560    7676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 03:41:32.843427    7676 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 03:41:32.843433    7676 kubeadm.go:156] found existing configuration files:
	
	I0610 03:41:32.843456    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf
	I0610 03:41:32.845898    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 03:41:32.845918    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 03:41:32.848517    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf
	I0610 03:41:32.851679    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 03:41:32.851702    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 03:41:32.854714    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf
	I0610 03:41:32.857221    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 03:41:32.857248    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 03:41:32.860344    7676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf
	I0610 03:41:32.863353    7676 kubeadm.go:162] "https://control-plane.minikube.internal:51320" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51320 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 03:41:32.863379    7676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
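
The four grep/rm pairs above apply one rule (visible at kubeadm.go:162): a leftover kubeconfig is kept only if it already references the expected control-plane endpoint. Since none of the files survive the kubeadm reset, every grep exits with status 2 and each rm -f is a no-op. A sketch of that rule under the same assumptions, with the endpoint and paths taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // cleanStaleConfig removes path unless it mentions the expected endpoint.
    // grep exits non-zero both when the string is absent and when the file is
    // missing, so a missing file also takes the removal branch.
    func cleanStaleConfig(endpoint, path string) {
        if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
            _ = exec.Command("sudo", "rm", "-f", path).Run()
        }
    }

    func main() {
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            cleanStaleConfig("https://control-plane.minikube.internal:51320", "/etc/kubernetes/"+f)
        }
    }
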
	I0610 03:41:32.865850    7676 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 03:41:32.883354    7676 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0610 03:41:32.883389    7676 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 03:41:32.931115    7676 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 03:41:32.931191    7676 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 03:41:32.931271    7676 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 03:41:32.980988    7676 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 03:41:32.988121    7676 out.go:204]   - Generating certificates and keys ...
	I0610 03:41:32.988195    7676 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 03:41:32.988244    7676 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 03:41:32.988359    7676 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 03:41:32.988472    7676 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 03:41:32.988510    7676 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 03:41:32.988538    7676 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 03:41:32.988571    7676 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 03:41:32.988616    7676 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 03:41:32.988658    7676 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 03:41:32.988710    7676 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 03:41:32.988745    7676 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 03:41:32.988847    7676 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 03:41:33.129420    7676 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 03:41:33.243038    7676 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 03:41:33.344548    7676 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 03:41:33.440698    7676 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 03:41:33.468982    7676 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 03:41:33.469524    7676 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 03:41:33.469563    7676 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 03:41:33.544616    7676 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 03:41:33.548521    7676 out.go:204]   - Booting up control plane ...
	I0610 03:41:33.548571    7676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 03:41:33.548633    7676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 03:41:33.548664    7676 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 03:41:33.548716    7676 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 03:41:33.548850    7676 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 03:41:38.051355    7676 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.503431 seconds
	I0610 03:41:38.051420    7676 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 03:41:38.055105    7676 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 03:41:38.571784    7676 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 03:41:38.572068    7676 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-390000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 03:41:39.075469    7676 kubeadm.go:309] [bootstrap-token] Using token: 6it540.knfmkd9jycdzv6dz
	I0610 03:41:39.081994    7676 out.go:204]   - Configuring RBAC rules ...
	I0610 03:41:39.082049    7676 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 03:41:39.082089    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 03:41:39.088979    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 03:41:39.089810    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 03:41:39.090714    7676 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 03:41:39.091515    7676 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 03:41:39.094742    7676 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 03:41:39.270464    7676 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 03:41:39.479089    7676 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 03:41:39.479672    7676 kubeadm.go:309] 
	I0610 03:41:39.479701    7676 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 03:41:39.479704    7676 kubeadm.go:309] 
	I0610 03:41:39.479745    7676 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 03:41:39.479758    7676 kubeadm.go:309] 
	I0610 03:41:39.479772    7676 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 03:41:39.479819    7676 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 03:41:39.479854    7676 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 03:41:39.479858    7676 kubeadm.go:309] 
	I0610 03:41:39.479892    7676 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 03:41:39.479894    7676 kubeadm.go:309] 
	I0610 03:41:39.479918    7676 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 03:41:39.479922    7676 kubeadm.go:309] 
	I0610 03:41:39.479953    7676 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 03:41:39.479996    7676 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 03:41:39.480033    7676 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 03:41:39.480036    7676 kubeadm.go:309] 
	I0610 03:41:39.480080    7676 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 03:41:39.480119    7676 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 03:41:39.480123    7676 kubeadm.go:309] 
	I0610 03:41:39.480162    7676 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6it540.knfmkd9jycdzv6dz \
	I0610 03:41:39.480210    7676 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb \
	I0610 03:41:39.480222    7676 kubeadm.go:309] 	--control-plane 
	I0610 03:41:39.480225    7676 kubeadm.go:309] 
	I0610 03:41:39.480273    7676 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 03:41:39.480276    7676 kubeadm.go:309] 
	I0610 03:41:39.480313    7676 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6it540.knfmkd9jycdzv6dz \
	I0610 03:41:39.480360    7676 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f7fe6ae71b856fd6b6179c41fff2157e8fd728e5d925a1fc919a0499149ebdbb 
	I0610 03:41:39.480542    7676 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 03:41:39.480550    7676 cni.go:84] Creating CNI manager for ""
	I0610 03:41:39.480557    7676 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:41:39.481918    7676 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 03:41:39.488708    7676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 03:41:39.491598    7676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
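
Here minikube pushes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist; the payload itself is not shown in the log. The snippet below is an illustrative bridge CNI config of that general shape, not the actual file: the subnet and every plugin option are assumptions.

    package main

    import "os"

    // Example bridge CNI chain; the real 1-k8s.conflist contents are not in
    // the log, so all fields here are illustrative.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }`

    func main() {
        // Same destination the scp step targets (requires root on the node).
        _ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644)
    }
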
	I0610 03:41:39.497462    7676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 03:41:39.497547    7676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-390000 minikube.k8s.io/updated_at=2024_06_10T03_41_39_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=stopped-upgrade-390000 minikube.k8s.io/primary=true
	I0610 03:41:39.497591    7676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 03:41:39.508634    7676 ops.go:34] apiserver oom_adj: -16
	I0610 03:41:39.541362    7676 kubeadm.go:1107] duration metric: took 43.80525ms to wait for elevateKubeSystemPrivileges
	W0610 03:41:39.541505    7676 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 03:41:39.541511    7676 kubeadm.go:393] duration metric: took 4m11.555618292s to StartCluster
	I0610 03:41:39.541521    7676 settings.go:142] acquiring lock: {Name:mke35f292ed93eff7117a159773dd0e114b7dd40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:41:39.541610    7676 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:41:39.542006    7676 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/kubeconfig: {Name:mke25032b58aa44d6357ccc49c0a5254f131209e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:41:39.542189    7676 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:41:39.546637    7676 out.go:177] * Verifying Kubernetes components...
	I0610 03:41:39.542231    7676 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 03:41:39.542282    7676 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:41:39.554599    7676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 03:41:39.554602    7676 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-390000"
	I0610 03:41:39.554604    7676 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-390000"
	I0610 03:41:39.554616    7676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-390000"
	I0610 03:41:39.554618    7676 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-390000"
	W0610 03:41:39.554622    7676 addons.go:243] addon storage-provisioner should already be in state true
	I0610 03:41:39.554634    7676 host.go:66] Checking if "stopped-upgrade-390000" exists ...
	I0610 03:41:39.559641    7676 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 03:41:39.563540    7676 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 03:41:39.563546    7676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 03:41:39.563552    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:41:39.564503    7676 kapi.go:59] client config for stopped-upgrade-390000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.key", CAFile:"/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101f80460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
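
The rest.Config dump above shows how the test client authenticates: a client cert/key pair plus the profile CA, against https://10.0.2.15:8443. A minimal sketch, assuming k8s.io/client-go, of building an equivalent clientset from those same fields:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                // Paths copied from the log line above.
                CertFile: "/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/stopped-upgrade-390000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19046-4812/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println("client config:", err)
            return
        }
        fmt.Println("clientset built:", cs != nil)
    }
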
	I0610 03:41:39.564631    7676 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-390000"
	W0610 03:41:39.564638    7676 addons.go:243] addon default-storageclass should already be in state true
	I0610 03:41:39.564648    7676 host.go:66] Checking if "stopped-upgrade-390000" exists ...
	I0610 03:41:39.565453    7676 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 03:41:39.565459    7676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 03:41:39.565463    7676 sshutil.go:53] new ssh client: &{IP:localhost Port:51286 SSHKeyPath:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/stopped-upgrade-390000/id_rsa Username:docker}
	I0610 03:41:39.648031    7676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 03:41:39.653965    7676 api_server.go:52] waiting for apiserver process to appear ...
	I0610 03:41:39.654009    7676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 03:41:39.658409    7676 api_server.go:72] duration metric: took 116.207583ms to wait for apiserver process to appear ...
	I0610 03:41:39.658417    7676 api_server.go:88] waiting for apiserver healthz status ...
	I0610 03:41:39.658424    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:39.663581    7676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 03:41:39.731413    7676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
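
The two kubectl runs above are how addon manifests are applied once they have been scp'd into the guest: sudo with an inline KUBECONFIG assignment and the version-pinned kubectl binary. A hedged sketch of the same invocation via os/exec; the paths are copied from the log, and the command only makes sense executed inside the minikube guest:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the logged command line exactly; sudo accepts the
	// VAR=value prefix, so KUBECONFIG reaches kubectl despite sudo's
	// environment scrubbing.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
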
	I0610 03:41:44.660646    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:44.660699    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:49.661216    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:49.661255    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:54.661696    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:54.661717    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:41:59.662276    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:41:59.662318    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:04.663143    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:04.663196    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:09.664128    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:09.664188    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0610 03:42:10.001760    7676 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0610 03:42:10.006153    7676 out.go:177] * Enabled addons: storage-provisioner
	I0610 03:42:10.015960    7676 addons.go:510] duration metric: took 30.473402125s for enable addons: enabled=[storage-provisioner]
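
The healthz probes above fail on a fixed five-second cadence because each GET carries its own bounded client timeout, which is what surfaces as "context deadline exceeded (Client.Timeout exceeded while awaiting headers)". A minimal sketch of that polling pattern, assuming a 5s timeout and skipped certificate verification (the VM's serving cert is not trusted by the host); this is an illustration, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Bounded per-request timeout: an unreachable apiserver turns
		// into one "context deadline exceeded" error every 5 seconds.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 3; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			continue
		}
		fmt.Println("healthz:", resp.Status)
		resp.Body.Close()
	}
}

A per-request timeout rather than one overall deadline is what lets the loop keep probing indefinitely while interleaving the log-gathering passes below.
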
	I0610 03:42:14.665414    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:14.665501    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:19.666216    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:19.666259    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:24.667056    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:24.667086    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:29.668956    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:29.669006    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:34.671487    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:34.671530    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:39.673852    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:39.674040    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:42:39.693174    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:42:39.693269    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:42:39.708969    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:42:39.709050    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:42:39.728735    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:42:39.728814    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:42:39.746558    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:42:39.746623    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:42:39.757273    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:42:39.757343    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:42:39.771805    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:42:39.771874    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:42:39.781697    7676 logs.go:276] 0 containers: []
	W0610 03:42:39.781709    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:42:39.781768    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:42:39.791712    7676 logs.go:276] 1 containers: [77a0264e1a31]
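
Container discovery above is one docker ps query per control-plane component, keyed on the k8s_<name> container-name prefix that the Docker CRI gives Kubernetes-managed containers. A sketch of the same enumeration, with the component list taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{
		"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
		"k8s_kube-scheduler", "k8s_kube-proxy",
		"k8s_kube-controller-manager", "k8s_kindnet",
		"k8s_storage-provisioner",
	} {
		// -a includes exited containers, so crashed components are
		// still found and their logs can be collected afterwards.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	}
}
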
	I0610 03:42:39.791730    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:42:39.791736    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:42:39.825449    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:42:39.825458    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:42:39.836763    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:42:39.836774    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:42:39.853499    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:42:39.853515    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:42:39.866408    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:42:39.866421    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:42:39.877844    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:42:39.877856    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:42:39.889776    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:42:39.889786    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:42:39.907213    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:42:39.907223    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:42:39.918717    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:42:39.918727    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:42:39.922957    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:42:39.922966    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:42:39.961761    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:42:39.961772    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:42:39.976328    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:42:39.976338    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:42:39.990974    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:42:39.990985    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
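
Each gathering pass then runs a fixed set of shell commands through /bin/bash -c: journald units for kubelet and Docker, a filtered dmesg, docker logs --tail 400 for every discovered container, and kubectl describe nodes. A condensed sketch of that loop; the container ID is a placeholder copied from the log, and the command strings are the ones shown above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"kube-apiserver": "docker logs --tail 400 0c574fa835e1",
	}
	for name, src := range sources {
		// Same pattern as ssh_runner: wrap each source in bash -c so
		// pipes and command substitution in the strings still work.
		out, err := exec.Command("/bin/bash", "-c", src).CombinedOutput()
		fmt.Printf("== %s (err=%v) ==\n%s\n", name, err, out)
	}
}

The remaining cycles in this log repeat the discovery and gathering passes unchanged between healthz probes, so the sketches above cover them as well.
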
	I0610 03:42:42.516452    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:47.518693    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:47.518904    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:42:47.541087    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:42:47.541185    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:42:47.556353    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:42:47.556424    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:42:47.568502    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:42:47.568570    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:42:47.579821    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:42:47.579890    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:42:47.590203    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:42:47.590272    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:42:47.600648    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:42:47.600711    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:42:47.611703    7676 logs.go:276] 0 containers: []
	W0610 03:42:47.611715    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:42:47.611774    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:42:47.621857    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:42:47.621871    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:42:47.621877    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:42:47.626150    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:42:47.626156    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:42:47.640023    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:42:47.640037    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:42:47.651380    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:42:47.651394    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:42:47.662788    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:42:47.662798    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:42:47.676904    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:42:47.676916    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:42:47.693805    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:42:47.693815    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:42:47.705036    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:42:47.705046    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:42:47.729668    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:42:47.729676    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:42:47.763879    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:42:47.763886    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:42:47.798221    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:42:47.798236    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:42:47.812199    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:42:47.812211    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:42:47.828300    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:42:47.828313    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:42:50.342438    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:42:55.345156    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:42:55.345502    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:42:55.401956    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:42:55.402076    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:42:55.419117    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:42:55.419201    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:42:55.431999    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:42:55.432063    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:42:55.442923    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:42:55.442986    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:42:55.453496    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:42:55.453562    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:42:55.464630    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:42:55.464699    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:42:55.475263    7676 logs.go:276] 0 containers: []
	W0610 03:42:55.475276    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:42:55.475334    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:42:55.485221    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:42:55.485237    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:42:55.485244    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:42:55.496315    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:42:55.496328    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:42:55.500922    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:42:55.500932    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:42:55.515175    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:42:55.515190    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:42:55.537835    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:42:55.537849    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:42:55.549191    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:42:55.549202    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:42:55.573788    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:42:55.573796    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:42:55.585817    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:42:55.585827    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:42:55.603361    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:42:55.603371    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:42:55.635846    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:42:55.635853    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:42:55.676511    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:42:55.676524    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:42:55.692541    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:42:55.692552    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:42:55.706583    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:42:55.706595    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:42:58.220455    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:03.223215    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:03.223519    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:03.257916    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:03.258035    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:03.279249    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:03.279361    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:03.294239    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:43:03.294305    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:03.305965    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:03.306029    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:03.316495    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:03.316554    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:03.326940    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:03.327005    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:03.336948    7676 logs.go:276] 0 containers: []
	W0610 03:43:03.336963    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:03.337018    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:03.350867    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:03.350880    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:03.350886    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:03.385156    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:03.385168    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:03.400000    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:03.400011    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:03.411537    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:03.411551    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:43:03.435426    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:03.435436    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:03.451487    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:03.451499    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:03.484628    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:03.484636    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:03.489216    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:03.489224    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:03.503392    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:03.503401    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:03.514915    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:03.514926    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:03.531652    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:03.531662    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:03.544700    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:03.544711    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:03.558923    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:03.558933    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:06.073434    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:11.075983    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:11.076487    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:11.116375    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:11.116512    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:11.138457    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:11.138575    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:11.153731    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:43:11.153808    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:11.166285    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:11.166434    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:11.177378    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:11.177460    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:11.187829    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:11.187896    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:11.197951    7676 logs.go:276] 0 containers: []
	W0610 03:43:11.197964    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:11.198023    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:11.208355    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:11.208376    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:11.208382    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:11.226505    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:11.226515    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:11.242107    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:11.242117    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:11.259269    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:11.259279    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:11.270314    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:11.270329    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:11.281887    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:11.281897    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:11.293096    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:11.293106    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:11.309760    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:11.309770    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:43:11.333116    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:11.333127    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:11.366130    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:11.366141    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:11.370295    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:11.370303    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:11.403554    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:11.403563    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:11.415225    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:11.415240    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:13.929473    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:18.932215    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:18.932545    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:18.972003    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:18.972136    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:18.994844    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:18.994956    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:19.009788    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:43:19.009860    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:19.022410    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:19.022475    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:19.036458    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:19.036518    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:19.046728    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:19.046798    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:19.057192    7676 logs.go:276] 0 containers: []
	W0610 03:43:19.057203    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:19.057255    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:19.067772    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:19.067786    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:19.067791    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:19.081445    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:19.081459    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:19.094354    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:19.094368    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:19.105832    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:19.105842    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:19.117166    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:19.117178    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:19.152019    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:19.152026    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:19.165690    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:19.165700    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:19.177752    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:19.177765    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:19.192409    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:19.192423    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:19.203973    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:19.203983    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:19.221156    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:19.221168    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:43:19.246458    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:19.246465    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:19.250426    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:19.250432    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:21.785582    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:26.788412    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:26.788940    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:26.829070    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:26.829202    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:26.853634    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:26.853737    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:26.868430    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:43:26.868500    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:26.880541    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:26.880602    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:26.891478    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:26.891538    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:26.902477    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:26.902537    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:26.913044    7676 logs.go:276] 0 containers: []
	W0610 03:43:26.913056    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:26.913116    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:26.923120    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:26.923135    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:26.923141    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:26.935088    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:26.935101    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:43:26.958551    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:26.958561    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:26.969690    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:26.969701    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:26.974139    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:26.974145    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:27.013495    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:27.013506    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:27.027988    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:27.028002    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:27.039318    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:27.039331    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:27.051033    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:27.051043    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:27.083634    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:27.083640    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:27.097504    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:27.097515    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:27.109462    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:27.109472    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:27.124281    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:27.124293    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:29.644431    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:34.646911    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:34.647266    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:34.682494    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:34.682629    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:34.709602    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:34.709687    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:34.724029    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:43:34.724103    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:34.735161    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:34.735236    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:34.745432    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:34.745501    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:34.755626    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:34.755692    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:34.766072    7676 logs.go:276] 0 containers: []
	W0610 03:43:34.766084    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:34.766139    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:34.776553    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:34.776568    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:34.776573    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:34.813704    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:34.813718    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:34.827890    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:34.827902    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:34.839690    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:34.839703    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:34.855734    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:34.855745    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:34.873276    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:34.873288    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:34.884807    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:34.884821    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:34.896826    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:34.896839    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:34.931456    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:34.931463    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:34.935549    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:34.935558    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:34.949043    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:34.949055    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:34.960593    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:34.960603    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:34.972432    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:34.972443    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:43:37.498400    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:42.501302    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:42.501489    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:42.518439    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:42.518521    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:42.532893    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:42.532970    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:42.545249    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:43:42.545326    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:42.555431    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:42.555491    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:42.565728    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:42.565793    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:42.576377    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:42.576444    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:42.586358    7676 logs.go:276] 0 containers: []
	W0610 03:43:42.586368    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:42.586415    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:42.597017    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:42.597032    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:42.597038    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:42.611326    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:42.611335    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:43:42.634386    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:42.634393    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:42.648362    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:42.648375    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:42.661750    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:42.661765    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:42.673243    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:42.673256    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:42.684637    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:42.684649    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:42.696041    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:42.696055    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:42.713362    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:42.713373    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:42.724896    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:42.724909    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:42.736338    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:42.736349    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:42.769619    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:42.769626    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:42.774071    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:42.774079    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:45.309783    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:50.312543    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:50.312933    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:50.352195    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:50.352329    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:50.373338    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:50.373448    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:50.390712    7676 logs.go:276] 2 containers: [926c9b723398 fe11f6a3b740]
	I0610 03:43:50.390786    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:50.402989    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:50.403059    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:50.414341    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:50.414411    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:50.425377    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:50.425447    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:50.443410    7676 logs.go:276] 0 containers: []
	W0610 03:43:50.443427    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:50.443486    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:50.458778    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:50.458795    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:50.458801    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:50.492664    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:50.492675    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:50.507908    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:50.507922    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:50.521898    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:50.521911    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:50.534055    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:50.534064    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:50.546162    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:50.546175    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:50.561128    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:50.561142    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:50.595235    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:50.595242    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:50.599106    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:50.599114    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:50.610568    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:50.610578    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:43:50.634985    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:50.634994    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:50.645989    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:50.646001    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:50.657963    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:50.657976    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:53.175948    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:43:58.177093    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:43:58.177433    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:43:58.209455    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:43:58.209581    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:43:58.228580    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:43:58.228670    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:43:58.242653    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:43:58.242735    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:43:58.254386    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:43:58.254461    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:43:58.264839    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:43:58.264905    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:43:58.275318    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:43:58.275388    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:43:58.286657    7676 logs.go:276] 0 containers: []
	W0610 03:43:58.286670    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:43:58.286732    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:43:58.297168    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:43:58.297185    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:43:58.297191    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:43:58.331852    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:43:58.331864    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:43:58.335893    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:43:58.335902    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:43:58.347345    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:43:58.347357    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:43:58.361163    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:43:58.361173    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:43:58.378510    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:43:58.378519    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:43:58.390275    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:43:58.390285    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:43:58.430486    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:43:58.430497    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:43:58.444880    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:43:58.444891    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:43:58.458878    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:43:58.458887    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:43:58.470106    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:43:58.470120    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:43:58.481780    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:43:58.481789    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:43:58.497005    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:43:58.497014    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:43:58.508262    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:43:58.508277    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:43:58.520252    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:43:58.520261    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:01.046971    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:44:06.049731    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:44:06.049964    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:44:06.077874    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:44:06.077972    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:44:06.093054    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:44:06.093125    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:44:06.105582    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:44:06.105651    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:44:06.115941    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:44:06.116005    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:44:06.125916    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:44:06.125986    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:44:06.136339    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:44:06.136405    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:44:06.145969    7676 logs.go:276] 0 containers: []
	W0610 03:44:06.145979    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:44:06.146029    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:44:06.156809    7676 logs.go:276] 1 containers: [77a0264e1a31]
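Every container ID fed into the "Gathering logs for ..." steps comes from the enumeration just above: one docker ps -a per component, filtered on the k8s_ name prefix and formatted down to bare IDs. A rough stand-alone equivalent (the helper name and component list are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers reproduces the enumeration step from the log:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; empty if none
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }

Run on the node, the kindnet query would print "0 containers: []", matching the "No container was found" warnings in this log.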
	I0610 03:44:06.156828    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:44:06.156833    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:44:06.190151    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:44:06.190158    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:44:06.204336    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:44:06.204345    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:44:06.222030    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:44:06.222042    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:06.247174    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:44:06.247180    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:44:06.258617    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:44:06.258631    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:44:06.269660    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:44:06.269673    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:44:06.306305    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:44:06.306319    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:44:06.320578    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:44:06.320592    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:44:06.332443    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:44:06.332455    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:44:06.343860    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:44:06.343873    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:44:06.348137    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:44:06.348148    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:44:06.359328    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:44:06.359338    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:44:06.370888    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:44:06.370900    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:44:06.388232    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:44:06.388246    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
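Each "Gathering logs for X ..." / Run pair above is a single remote shell command: docker logs --tail 400 <id> for containers, journalctl -u <unit> -n 400 for systemd units, plus one kubectl describe nodes. A condensed local sketch of that dispatch, with the command table read off the Run lines (minikube's real logs.go drives these through ssh_runner against the guest instead):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command locally, standing in for the
    // ssh_runner.Run calls in the log above.
    func gather(name string, args ...string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
        if err != nil {
            fmt.Printf("%s: %v\n", name, err)
        }
        _ = out // the report embeds this output elsewhere; dropped here
    }

    func main() {
        // Container logs: last 400 lines, as in "docker logs --tail 400 <id>".
        gather("kube-apiserver [0c574fa835e1]", "docker", "logs", "--tail", "400", "0c574fa835e1")
        // Unit logs: last 400 journal entries, as in "journalctl -u kubelet -n 400".
        gather("kubelet", "sudo", "journalctl", "-u", "kubelet", "-n", "400")
        gather("Docker", "sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
    }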
	I0610 03:44:08.902233    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:44:13.905228    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:44:13.905695    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:44:13.947907    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:44:13.948031    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:44:13.969091    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:44:13.969199    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:44:13.985124    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:44:13.985205    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:44:14.001585    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:44:14.001658    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:44:14.014274    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:44:14.014356    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:44:14.025989    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:44:14.026055    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:44:14.036320    7676 logs.go:276] 0 containers: []
	W0610 03:44:14.036332    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:44:14.036389    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:44:14.046715    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:44:14.046731    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:44:14.046736    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:44:14.081095    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:44:14.081109    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:44:14.099875    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:44:14.099887    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:44:14.103939    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:44:14.103948    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:44:14.117581    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:44:14.117594    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:44:14.135793    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:44:14.135803    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:44:14.147702    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:44:14.147714    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:44:14.160164    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:44:14.160176    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:44:14.172747    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:44:14.172758    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:44:14.185270    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:44:14.185281    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:44:14.198324    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:44:14.198336    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:44:14.210987    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:44:14.211000    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:14.236044    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:44:14.236059    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:44:14.270486    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:44:14.270504    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:44:14.283144    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:44:14.283155    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:44:16.803941    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:44:21.806273    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:44:21.806769    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:44:21.849722    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:44:21.849857    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:44:21.871360    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:44:21.871469    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:44:21.886346    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:44:21.886424    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:44:21.898437    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:44:21.898500    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:44:21.910435    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:44:21.910502    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:44:21.920917    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:44:21.920989    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:44:21.934572    7676 logs.go:276] 0 containers: []
	W0610 03:44:21.934586    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:44:21.934643    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:44:21.945084    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:44:21.945104    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:44:21.945109    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:44:21.965230    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:44:21.965242    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:21.991068    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:44:21.991079    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:44:22.003408    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:44:22.003420    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:44:22.007467    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:44:22.007477    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:44:22.041253    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:44:22.041267    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:44:22.058487    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:44:22.058496    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:44:22.071297    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:44:22.071311    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:44:22.083476    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:44:22.083487    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:44:22.098154    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:44:22.098167    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:44:22.116017    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:44:22.116027    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:44:22.150025    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:44:22.150031    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:44:22.164435    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:44:22.164443    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:44:22.182268    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:44:22.182279    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:44:22.194316    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:44:22.194325    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:44:24.707659    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:44:29.710101    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:44:29.710325    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:44:29.734784    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:44:29.734894    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:44:29.750321    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:44:29.750392    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:44:29.762817    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:44:29.762884    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:44:29.773261    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:44:29.773320    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:44:29.785233    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:44:29.785294    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:44:29.798822    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:44:29.798881    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:44:29.810102    7676 logs.go:276] 0 containers: []
	W0610 03:44:29.810115    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:44:29.810166    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:44:29.827905    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:44:29.827921    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:44:29.827926    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:44:29.839545    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:44:29.839557    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:29.864418    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:44:29.864428    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:44:29.879611    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:44:29.879624    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:44:29.893924    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:44:29.893939    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:44:29.905230    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:44:29.905242    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:44:29.909796    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:44:29.909805    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:44:29.944553    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:44:29.944565    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:44:29.958688    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:44:29.958699    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:44:29.970612    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:44:29.970627    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:44:29.988165    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:44:29.988174    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:44:29.999765    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:44:29.999779    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:44:30.034043    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:44:30.034050    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:44:30.047805    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:44:30.047816    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:44:30.059651    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:44:30.059664    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:44:32.573289    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:44:37.576188    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:44:37.576643    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:44:37.617638    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:44:37.617767    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:44:37.642400    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:44:37.642492    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:44:37.656876    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:44:37.656941    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:44:37.668564    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:44:37.668628    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:44:37.679417    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:44:37.679492    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:44:37.689834    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:44:37.689890    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:44:37.699923    7676 logs.go:276] 0 containers: []
	W0610 03:44:37.699935    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:44:37.699983    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:44:37.710711    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:44:37.710726    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:44:37.710733    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:44:37.728845    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:44:37.728857    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:44:37.741257    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:44:37.741270    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:44:37.753414    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:44:37.753428    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:44:37.788575    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:44:37.788588    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:44:37.802555    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:44:37.802568    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:44:37.820280    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:44:37.820292    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:44:37.837177    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:44:37.837190    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:44:37.850592    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:44:37.850605    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:44:37.854812    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:44:37.854820    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:44:37.869038    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:44:37.869050    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:44:37.902227    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:44:37.902235    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:44:37.914110    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:44:37.914122    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:44:37.925778    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:44:37.925790    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:44:37.937319    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:44:37.937332    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:40.464931    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:44:45.467301    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:44:45.467731    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:44:45.507983    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:44:45.508107    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:44:45.539579    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:44:45.539664    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:44:45.552680    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:44:45.552760    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:44:45.563353    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:44:45.563424    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:44:45.574686    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:44:45.574760    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:44:45.585422    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:44:45.585482    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:44:45.595877    7676 logs.go:276] 0 containers: []
	W0610 03:44:45.595887    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:44:45.595935    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:44:45.606708    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:44:45.606725    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:44:45.606730    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:44:45.618578    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:44:45.618591    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:44:45.623177    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:44:45.623184    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:44:45.636147    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:44:45.636160    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:44:45.648225    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:44:45.648239    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:44:45.665652    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:44:45.665661    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:44:45.677769    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:44:45.677780    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:45.703945    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:44:45.703952    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:44:45.717787    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:44:45.717797    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:44:45.732332    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:44:45.732342    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:44:45.766732    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:44:45.766738    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:44:45.801525    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:44:45.801535    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:44:45.813390    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:44:45.813401    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:44:45.827349    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:44:45.827359    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:44:45.841511    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:44:45.841521    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:44:48.355558    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:44:53.357966    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:44:53.358031    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:44:53.368960    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:44:53.369028    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:44:53.379172    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:44:53.379243    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:44:53.389777    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:44:53.389844    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:44:53.400497    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:44:53.400576    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:44:53.412236    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:44:53.412303    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:44:53.427309    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:44:53.427355    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:44:53.437908    7676 logs.go:276] 0 containers: []
	W0610 03:44:53.437919    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:44:53.437962    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:44:53.449718    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:44:53.449738    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:44:53.449743    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:44:53.467543    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:44:53.467553    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:44:53.507050    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:44:53.507063    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:44:53.521342    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:44:53.521353    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:44:53.535778    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:44:53.535791    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:44:53.570378    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:44:53.570385    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:44:53.574369    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:44:53.574377    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:44:53.585913    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:44:53.585927    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:44:53.598027    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:44:53.598039    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:44:53.609927    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:44:53.609939    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:44:53.622291    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:44:53.622305    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:44:53.642000    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:44:53.642014    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:44:53.653951    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:44:53.653961    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:44:53.665407    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:44:53.665417    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:44:53.690125    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:44:53.690131    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:44:56.203833    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:45:01.204852    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:45:01.204986    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:45:01.222558    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:45:01.222633    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:45:01.236901    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:45:01.236971    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:45:01.249668    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:45:01.249727    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:45:01.259820    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:45:01.259891    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:45:01.269673    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:45:01.269737    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:45:01.279685    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:45:01.279748    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:45:01.289706    7676 logs.go:276] 0 containers: []
	W0610 03:45:01.289716    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:45:01.289766    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:45:01.304167    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:45:01.304184    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:45:01.304189    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:45:01.315200    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:45:01.315210    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:45:01.326881    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:45:01.326894    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:45:01.338286    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:45:01.338299    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:45:01.353567    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:45:01.353579    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:45:01.367757    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:45:01.367767    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:45:01.381986    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:45:01.381999    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:45:01.400102    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:45:01.400114    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:45:01.434493    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:45:01.434503    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:45:01.438916    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:45:01.438924    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:45:01.450181    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:45:01.450194    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:45:01.461607    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:45:01.461621    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:45:01.502700    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:45:01.502716    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:45:01.516833    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:45:01.516846    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:45:01.528290    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:45:01.528301    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:45:04.053687    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:45:09.056006    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:45:09.056442    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:45:09.100000    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:45:09.100121    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:45:09.123194    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:45:09.123272    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:45:09.138555    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:45:09.138631    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:45:09.150279    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:45:09.150349    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:45:09.163408    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:45:09.163473    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:45:09.174251    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:45:09.174322    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:45:09.185097    7676 logs.go:276] 0 containers: []
	W0610 03:45:09.185110    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:45:09.185171    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:45:09.195555    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:45:09.195571    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:45:09.195577    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:45:09.207520    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:45:09.207532    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:45:09.219512    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:45:09.219524    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:45:09.231500    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:45:09.231511    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:45:09.246128    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:45:09.246141    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:45:09.250635    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:45:09.250644    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:45:09.263103    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:45:09.263116    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:45:09.282526    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:45:09.282535    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:45:09.299758    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:45:09.299769    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:45:09.311406    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:45:09.311418    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:45:09.323277    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:45:09.323289    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:45:09.356483    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:45:09.356494    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:45:09.370791    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:45:09.370802    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:45:09.384732    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:45:09.384742    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:45:09.408859    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:45:09.408867    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:45:11.946337    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:45:16.948454    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:45:16.948521    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:45:16.961241    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:45:16.961294    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:45:16.973581    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:45:16.973651    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:45:16.987294    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:45:16.987357    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:45:17.005461    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:45:17.005515    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:45:17.021867    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:45:17.021924    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:45:17.035349    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:45:17.035401    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:45:17.046739    7676 logs.go:276] 0 containers: []
	W0610 03:45:17.046753    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:45:17.046800    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:45:17.058519    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:45:17.058538    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:45:17.058545    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:45:17.063281    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:45:17.063292    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:45:17.101492    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:45:17.101505    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:45:17.117781    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:45:17.117800    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:45:17.132103    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:45:17.132115    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:45:17.144950    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:45:17.144961    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:45:17.169359    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:45:17.169369    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:45:17.181979    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:45:17.181989    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:45:17.213612    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:45:17.213629    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:45:17.231247    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:45:17.231270    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:45:17.248200    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:45:17.248212    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:45:17.263757    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:45:17.263769    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:45:17.277375    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:45:17.277386    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:45:17.292047    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:45:17.292058    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:45:17.326763    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:45:17.326782    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:45:19.847335    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:45:24.849802    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:45:24.850210    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:45:24.884538    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:45:24.884658    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:45:24.905056    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:45:24.905153    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:45:24.928370    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:45:24.928436    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:45:24.941562    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:45:24.941625    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:45:24.952816    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:45:24.952866    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:45:24.964420    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:45:24.964462    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:45:24.976168    7676 logs.go:276] 0 containers: []
	W0610 03:45:24.976180    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:45:24.976230    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:45:24.989347    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:45:24.989369    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:45:24.989376    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:45:25.006366    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:45:25.006387    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:45:25.020947    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:45:25.020959    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:45:25.035194    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:45:25.035212    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:45:25.050415    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:45:25.050429    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:45:25.064989    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:45:25.065002    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:45:25.091212    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:45:25.091232    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:45:25.105306    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:45:25.105319    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:45:25.141452    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:45:25.141462    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:45:25.146349    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:45:25.146356    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:45:25.184605    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:45:25.184619    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:45:25.200031    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:45:25.200044    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:45:25.216472    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:45:25.216485    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:45:25.231115    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:45:25.231123    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:45:25.252078    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:45:25.252091    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:45:27.765661    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:45:32.767928    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:45:32.768107    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0610 03:45:32.782975    7676 logs.go:276] 1 containers: [0c574fa835e1]
	I0610 03:45:32.783055    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0610 03:45:32.794924    7676 logs.go:276] 1 containers: [66dff392b6f8]
	I0610 03:45:32.795003    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0610 03:45:32.805898    7676 logs.go:276] 4 containers: [0a393e3efa85 97314a60d54c 926c9b723398 fe11f6a3b740]
	I0610 03:45:32.805976    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0610 03:45:32.817292    7676 logs.go:276] 1 containers: [df1f6b38e3c4]
	I0610 03:45:32.817358    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0610 03:45:32.828596    7676 logs.go:276] 1 containers: [130de1241e80]
	I0610 03:45:32.828661    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0610 03:45:32.839598    7676 logs.go:276] 1 containers: [77932b72c45e]
	I0610 03:45:32.839665    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0610 03:45:32.852936    7676 logs.go:276] 0 containers: []
	W0610 03:45:32.852949    7676 logs.go:278] No container was found matching "kindnet"
	I0610 03:45:32.853010    7676 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0610 03:45:32.863967    7676 logs.go:276] 1 containers: [77a0264e1a31]
	I0610 03:45:32.863985    7676 logs.go:123] Gathering logs for kube-controller-manager [77932b72c45e] ...
	I0610 03:45:32.863990    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77932b72c45e"
	I0610 03:45:32.882333    7676 logs.go:123] Gathering logs for storage-provisioner [77a0264e1a31] ...
	I0610 03:45:32.882344    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 77a0264e1a31"
	I0610 03:45:32.898648    7676 logs.go:123] Gathering logs for container status ...
	I0610 03:45:32.898662    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 03:45:32.921957    7676 logs.go:123] Gathering logs for describe nodes ...
	I0610 03:45:32.921970    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 03:45:32.958191    7676 logs.go:123] Gathering logs for coredns [0a393e3efa85] ...
	I0610 03:45:32.958206    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a393e3efa85"
	I0610 03:45:32.971312    7676 logs.go:123] Gathering logs for kube-scheduler [df1f6b38e3c4] ...
	I0610 03:45:32.971326    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df1f6b38e3c4"
	I0610 03:45:32.986131    7676 logs.go:123] Gathering logs for kube-proxy [130de1241e80] ...
	I0610 03:45:32.986144    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 130de1241e80"
	I0610 03:45:32.999152    7676 logs.go:123] Gathering logs for coredns [926c9b723398] ...
	I0610 03:45:32.999166    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 926c9b723398"
	I0610 03:45:33.013249    7676 logs.go:123] Gathering logs for coredns [fe11f6a3b740] ...
	I0610 03:45:33.013261    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe11f6a3b740"
	I0610 03:45:33.026087    7676 logs.go:123] Gathering logs for etcd [66dff392b6f8] ...
	I0610 03:45:33.026098    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66dff392b6f8"
	I0610 03:45:33.040703    7676 logs.go:123] Gathering logs for coredns [97314a60d54c] ...
	I0610 03:45:33.040712    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97314a60d54c"
	I0610 03:45:33.053537    7676 logs.go:123] Gathering logs for Docker ...
	I0610 03:45:33.053549    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0610 03:45:33.078709    7676 logs.go:123] Gathering logs for kubelet ...
	I0610 03:45:33.078718    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 03:45:33.112855    7676 logs.go:123] Gathering logs for dmesg ...
	I0610 03:45:33.112861    7676 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 03:45:33.116687    7676 logs.go:123] Gathering logs for kube-apiserver [0c574fa835e1] ...
	I0610 03:45:33.116692    7676 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c574fa835e1"
	I0610 03:45:35.633767    7676 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0610 03:45:40.634528    7676 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0610 03:45:40.638237    7676 out.go:177] 
	W0610 03:45:40.642297    7676 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0610 03:45:40.642306    7676 out.go:239] * 
	W0610 03:45:40.642732    7676 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:45:40.658245    7676 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-390000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (577.63s)
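Note: the wait loop in the trace above is a plain HTTPS GET against the apiserver's /healthz endpoint. A standalone Go sketch of the same probe (hypothetical, not part of the test suite; InsecureSkipVerify stands in for minikube's client-certificate handling):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the trace above; a 5s client timeout reproduces
		// the "context deadline exceeded" failures seen there.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}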

TestPause/serial/Start (9.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-992000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-992000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.92642925s)

-- stdout --
	* [pause-992000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-992000" primary control-plane node in "pause-992000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-992000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-992000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-992000 -n pause-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-992000 -n pause-992000: exit status 7 (54.980375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.98s)
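Note: this and the remaining qemu2 failures share one root cause, visible in the stdout capture: nothing is listening on /var/run/socket_vmnet. A minimal Go sketch of that precondition check (an illustration, not harness code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "connection refused" here reproduces the ERROR lines captured above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}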

TestNoKubernetes/serial/StartWithK8s (9.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-168000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-168000 --driver=qemu2 : exit status 80 (9.813274167s)

-- stdout --
	* [NoKubernetes-168000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-168000" primary control-plane node in "NoKubernetes-168000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-168000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-168000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-168000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000: exit status 7 (66.773667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.88s)

TestNoKubernetes/serial/StartWithStopK8s (5.44s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --driver=qemu2 : exit status 80 (5.394663833s)

-- stdout --
	* [NoKubernetes-168000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-168000
	* Restarting existing qemu2 VM for "NoKubernetes-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-168000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000: exit status 7 (40.721042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.44s)
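Note: the post-mortem helper does not treat a non-zero `status` exit as fatal; exit status 7 with state "Stopped" is logged as "may be ok". A hedged Go sketch of reading that exit code (binary path and profile name copied from this run; running it assumes a local build):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", "NoKubernetes-168000")
		out, err := cmd.Output()
		fmt.Printf("host state: %s", out) // "Stopped" in these runs
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit status:", ee.ExitCode()) // 7 here; the harness notes "may be ok"
		}
	}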

TestNoKubernetes/serial/Start (5.42s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --driver=qemu2 : exit status 80 (5.378622584s)

-- stdout --
	* [NoKubernetes-168000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-168000
	* Restarting existing qemu2 VM for "NoKubernetes-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-168000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000: exit status 7 (38.621833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.42s)

TestNoKubernetes/serial/StartNoArgs (5.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-168000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-168000 --driver=qemu2 : exit status 80 (5.416385375s)

-- stdout --
	* [NoKubernetes-168000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-168000
	* Restarting existing qemu2 VM for "NoKubernetes-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-168000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-168000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-168000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-168000 -n NoKubernetes-168000: exit status 7 (51.438084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-168000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.47s)
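Note: all four TestNoKubernetes failures have the same two-attempt shape: StartHost fails, minikube retries once, then exits with GUEST_PROVISION (the alsologtostderr trace in the next failure shows the 5-second pause explicitly). A schematic Go sketch of that control flow, read off these traces rather than from minikube's source:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry models the single retry seen above: fail, wait, try once more.
	func startWithRetry(start func() error) error {
		if err := start(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			return start()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println("final error:", err)
	}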

TestNetworkPlugins/group/auto/Start (10.01s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.009717959s)

-- stdout --
	* [auto-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-811000" primary control-plane node in "auto-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:43:52.200732    7920 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:43:52.200859    7920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:43:52.200865    7920 out.go:304] Setting ErrFile to fd 2...
	I0610 03:43:52.200868    7920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:43:52.201007    7920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:43:52.202114    7920 out.go:298] Setting JSON to false
	I0610 03:43:52.219123    7920 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6203,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:43:52.219215    7920 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:43:52.225589    7920 out.go:177] * [auto-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:43:52.232623    7920 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:43:52.236570    7920 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:43:52.232716    7920 notify.go:220] Checking for updates...
	I0610 03:43:52.242570    7920 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:43:52.245557    7920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:43:52.248572    7920 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:43:52.251574    7920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:43:52.254945    7920 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:43:52.255009    7920 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:43:52.255060    7920 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:43:52.258562    7920 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:43:52.265572    7920 start.go:297] selected driver: qemu2
	I0610 03:43:52.265579    7920 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:43:52.265584    7920 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:43:52.267769    7920 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:43:52.269271    7920 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:43:52.272644    7920 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:43:52.272661    7920 cni.go:84] Creating CNI manager for ""
	I0610 03:43:52.272671    7920 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:43:52.272675    7920 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:43:52.272713    7920 start.go:340] cluster config:
	{Name:auto-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:43:52.276897    7920 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:43:52.283496    7920 out.go:177] * Starting "auto-811000" primary control-plane node in "auto-811000" cluster
	I0610 03:43:52.287539    7920 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:43:52.287554    7920 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:43:52.287566    7920 cache.go:56] Caching tarball of preloaded images
	I0610 03:43:52.287641    7920 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:43:52.287646    7920 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:43:52.287712    7920 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/auto-811000/config.json ...
	I0610 03:43:52.287722    7920 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/auto-811000/config.json: {Name:mkc404ee50843d75a94240daba00fa17ef73b647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:43:52.288111    7920 start.go:360] acquireMachinesLock for auto-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:43:52.288146    7920 start.go:364] duration metric: took 29.334µs to acquireMachinesLock for "auto-811000"
	I0610 03:43:52.288156    7920 start.go:93] Provisioning new machine with config: &{Name:auto-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:43:52.288191    7920 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:43:52.292548    7920 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:43:52.307980    7920 start.go:159] libmachine.API.Create for "auto-811000" (driver="qemu2")
	I0610 03:43:52.308004    7920 client.go:168] LocalClient.Create starting
	I0610 03:43:52.308060    7920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:43:52.308089    7920 main.go:141] libmachine: Decoding PEM data...
	I0610 03:43:52.308099    7920 main.go:141] libmachine: Parsing certificate...
	I0610 03:43:52.308142    7920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:43:52.308164    7920 main.go:141] libmachine: Decoding PEM data...
	I0610 03:43:52.308174    7920 main.go:141] libmachine: Parsing certificate...
	I0610 03:43:52.308616    7920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:43:52.456585    7920 main.go:141] libmachine: Creating SSH key...
	I0610 03:43:52.599179    7920 main.go:141] libmachine: Creating Disk image...
	I0610 03:43:52.599191    7920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:43:52.599412    7920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2
	I0610 03:43:52.612516    7920 main.go:141] libmachine: STDOUT: 
	I0610 03:43:52.612534    7920 main.go:141] libmachine: STDERR: 
	I0610 03:43:52.612590    7920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2 +20000M
	I0610 03:43:52.623637    7920 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:43:52.623654    7920 main.go:141] libmachine: STDERR: 
	I0610 03:43:52.623674    7920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2
	I0610 03:43:52.623679    7920 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:43:52.623706    7920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:ba:c8:cb:b7:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2
	I0610 03:43:52.625453    7920 main.go:141] libmachine: STDOUT: 
	I0610 03:43:52.625465    7920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:43:52.625486    7920 client.go:171] duration metric: took 317.472208ms to LocalClient.Create
	I0610 03:43:54.627657    7920 start.go:128] duration metric: took 2.339425084s to createHost
	I0610 03:43:54.627695    7920 start.go:83] releasing machines lock for "auto-811000", held for 2.339518042s
	W0610 03:43:54.627759    7920 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:43:54.641115    7920 out.go:177] * Deleting "auto-811000" in qemu2 ...
	W0610 03:43:54.660600    7920 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:43:54.660616    7920 start.go:728] Will try again in 5 seconds ...
	I0610 03:43:59.662809    7920 start.go:360] acquireMachinesLock for auto-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:43:59.663083    7920 start.go:364] duration metric: took 201.625µs to acquireMachinesLock for "auto-811000"
	I0610 03:43:59.663114    7920 start.go:93] Provisioning new machine with config: &{Name:auto-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:43:59.663203    7920 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:43:59.678614    7920 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:43:59.703853    7920 start.go:159] libmachine.API.Create for "auto-811000" (driver="qemu2")
	I0610 03:43:59.703890    7920 client.go:168] LocalClient.Create starting
	I0610 03:43:59.703973    7920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:43:59.704028    7920 main.go:141] libmachine: Decoding PEM data...
	I0610 03:43:59.704041    7920 main.go:141] libmachine: Parsing certificate...
	I0610 03:43:59.704089    7920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:43:59.704129    7920 main.go:141] libmachine: Decoding PEM data...
	I0610 03:43:59.704138    7920 main.go:141] libmachine: Parsing certificate...
	I0610 03:43:59.704617    7920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:43:59.850920    7920 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:00.104196    7920 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:00.104206    7920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:00.104440    7920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2
	I0610 03:44:00.117369    7920 main.go:141] libmachine: STDOUT: 
	I0610 03:44:00.117393    7920 main.go:141] libmachine: STDERR: 
	I0610 03:44:00.117462    7920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2 +20000M
	I0610 03:44:00.129017    7920 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:00.129033    7920 main.go:141] libmachine: STDERR: 
	I0610 03:44:00.129060    7920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2
	I0610 03:44:00.129065    7920 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:00.129092    7920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:5c:54:ec:c4:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/auto-811000/disk.qcow2
	I0610 03:44:00.130835    7920 main.go:141] libmachine: STDOUT: 
	I0610 03:44:00.130849    7920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:00.130863    7920 client.go:171] duration metric: took 426.964ms to LocalClient.Create
	I0610 03:44:02.133065    7920 start.go:128] duration metric: took 2.469810417s to createHost
	I0610 03:44:02.133145    7920 start.go:83] releasing machines lock for "auto-811000", held for 2.470022625s
	W0610 03:44:02.133509    7920 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:02.149146    7920 out.go:177] 
	W0610 03:44:02.153191    7920 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:44:02.153250    7920 out.go:239] * 
	W0610 03:44:02.155693    7920 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:44:02.168122    7920 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.01s)
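Note: the trace above shows the two disk-image steps that succeed before the socket_vmnet dial fails: a raw-to-qcow2 convert followed by a +20000M resize. A self-contained Go sketch of just those qemu-img invocations (paths are placeholders; the real run uses the profile's machines directory):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		raw, img := "disk.qcow2.raw", "disk.qcow2" // placeholder paths
		steps := [][]string{
			{"convert", "-f", "raw", "-O", "qcow2", raw, img}, // create the qcow2 image
			{"resize", img, "+20000M"},                        // grow it, as in "Image resized."
		}
		for _, args := range steps {
			out, err := exec.Command("qemu-img", args...).CombinedOutput()
			fmt.Printf("qemu-img %v\n%s(err=%v)\n", args, out, err)
		}
	}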

TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.831250167s)

-- stdout --
	* [kindnet-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-811000" primary control-plane node in "kindnet-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:44:04.344515    8033 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:44:04.344632    8033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:04.344635    8033 out.go:304] Setting ErrFile to fd 2...
	I0610 03:44:04.344637    8033 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:04.344756    8033 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:44:04.345845    8033 out.go:298] Setting JSON to false
	I0610 03:44:04.362413    8033 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6215,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:44:04.362477    8033 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:44:04.368156    8033 out.go:177] * [kindnet-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:44:04.375192    8033 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:44:04.379119    8033 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:44:04.375240    8033 notify.go:220] Checking for updates...
	I0610 03:44:04.385142    8033 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:44:04.388204    8033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:44:04.391216    8033 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:44:04.394103    8033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:44:04.397456    8033 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:44:04.397529    8033 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:44:04.397579    8033 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:44:04.402196    8033 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:44:04.409222    8033 start.go:297] selected driver: qemu2
	I0610 03:44:04.409229    8033 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:44:04.409236    8033 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:44:04.411555    8033 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:44:04.415183    8033 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:44:04.418224    8033 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:44:04.418274    8033 cni.go:84] Creating CNI manager for "kindnet"
	I0610 03:44:04.418279    8033 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 03:44:04.418313    8033 start.go:340] cluster config:
	{Name:kindnet-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:44:04.422850    8033 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:44:04.430184    8033 out.go:177] * Starting "kindnet-811000" primary control-plane node in "kindnet-811000" cluster
	I0610 03:44:04.434123    8033 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:44:04.434140    8033 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:44:04.434151    8033 cache.go:56] Caching tarball of preloaded images
	I0610 03:44:04.434238    8033 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:44:04.434244    8033 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:44:04.434325    8033 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/kindnet-811000/config.json ...
	I0610 03:44:04.434336    8033 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/kindnet-811000/config.json: {Name:mk713d6d15237bf6493815c5524b3f0d8c7a69c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:44:04.434568    8033 start.go:360] acquireMachinesLock for kindnet-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:04.434605    8033 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "kindnet-811000"
	I0610 03:44:04.434615    8033 start.go:93] Provisioning new machine with config: &{Name:kindnet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:04.434645    8033 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:04.443154    8033 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:04.461246    8033 start.go:159] libmachine.API.Create for "kindnet-811000" (driver="qemu2")
	I0610 03:44:04.461281    8033 client.go:168] LocalClient.Create starting
	I0610 03:44:04.461341    8033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:04.461373    8033 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:04.461385    8033 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:04.461436    8033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:04.461461    8033 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:04.461471    8033 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:04.461840    8033 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:04.606880    8033 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:04.663241    8033 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:04.663247    8033 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:04.663405    8033 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2
	I0610 03:44:04.676174    8033 main.go:141] libmachine: STDOUT: 
	I0610 03:44:04.676199    8033 main.go:141] libmachine: STDERR: 
	I0610 03:44:04.676255    8033 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2 +20000M
	I0610 03:44:04.687278    8033 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:04.687296    8033 main.go:141] libmachine: STDERR: 
	I0610 03:44:04.687316    8033 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2
	I0610 03:44:04.687322    8033 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:04.687351    8033 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:db:27:41:e3:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2
	I0610 03:44:04.689084    8033 main.go:141] libmachine: STDOUT: 
	I0610 03:44:04.689098    8033 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:04.689118    8033 client.go:171] duration metric: took 227.827458ms to LocalClient.Create
	I0610 03:44:06.691243    8033 start.go:128] duration metric: took 2.256563042s to createHost
	I0610 03:44:06.691268    8033 start.go:83] releasing machines lock for "kindnet-811000", held for 2.256633584s
	W0610 03:44:06.691303    8033 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:06.703253    8033 out.go:177] * Deleting "kindnet-811000" in qemu2 ...
	W0610 03:44:06.719552    8033 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:06.719568    8033 start.go:728] Will try again in 5 seconds ...
	I0610 03:44:11.720558    8033 start.go:360] acquireMachinesLock for kindnet-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:11.720812    8033 start.go:364] duration metric: took 201.292µs to acquireMachinesLock for "kindnet-811000"
	I0610 03:44:11.720872    8033 start.go:93] Provisioning new machine with config: &{Name:kindnet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kindnet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:11.720945    8033 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:11.731283    8033 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:11.752718    8033 start.go:159] libmachine.API.Create for "kindnet-811000" (driver="qemu2")
	I0610 03:44:11.752759    8033 client.go:168] LocalClient.Create starting
	I0610 03:44:11.752832    8033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:11.752876    8033 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:11.752886    8033 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:11.752924    8033 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:11.752952    8033 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:11.752960    8033 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:11.753277    8033 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:11.900771    8033 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:12.088182    8033 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:12.088192    8033 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:12.088371    8033 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2
	I0610 03:44:12.101291    8033 main.go:141] libmachine: STDOUT: 
	I0610 03:44:12.101313    8033 main.go:141] libmachine: STDERR: 
	I0610 03:44:12.101369    8033 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2 +20000M
	I0610 03:44:12.112365    8033 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:12.112381    8033 main.go:141] libmachine: STDERR: 
	I0610 03:44:12.112405    8033 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2
	I0610 03:44:12.112411    8033 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:12.112441    8033 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:20:ff:d1:74:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kindnet-811000/disk.qcow2
	I0610 03:44:12.114180    8033 main.go:141] libmachine: STDOUT: 
	I0610 03:44:12.114194    8033 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:12.114207    8033 client.go:171] duration metric: took 361.440417ms to LocalClient.Create
	I0610 03:44:14.114525    8033 start.go:128] duration metric: took 2.393546791s to createHost
	I0610 03:44:14.114549    8033 start.go:83] releasing machines lock for "kindnet-811000", held for 2.393704208s
	W0610 03:44:14.114618    8033 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:14.123250    8033 out.go:177] 
	W0610 03:44:14.128394    8033 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:44:14.128402    8033 out.go:239] * 
	* 
	W0610 03:44:14.128906    8033 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:44:14.139171    8033 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
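Every start in this group dies at the same step: qemu-img creates and resizes the disk successfully, but socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never handed a network fd and minikube exits 80 after a single retry. The probe below is a hedged sketch, not part of net_test.go or the minikube tree; it just dials the same SocketVMnetPath the qemu2 driver uses, to confirm on the agent whether the socket_vmnet daemon is accepting connections.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"
	// This is the step socket_vmnet_client fails at in the logs:
	// the dial returns ECONNREFUSED when no daemon is listening.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}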

TestNetworkPlugins/group/calico/Start (10s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.002400416s)

-- stdout --
	* [calico-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-811000" primary control-plane node in "calico-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:44:16.419145    8147 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:44:16.419267    8147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:16.419271    8147 out.go:304] Setting ErrFile to fd 2...
	I0610 03:44:16.419273    8147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:16.419411    8147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:44:16.420528    8147 out.go:298] Setting JSON to false
	I0610 03:44:16.436844    8147 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6227,"bootTime":1718010029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:44:16.436924    8147 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:44:16.441508    8147 out.go:177] * [calico-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:44:16.449581    8147 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:44:16.449645    8147 notify.go:220] Checking for updates...
	I0610 03:44:16.453636    8147 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:44:16.456551    8147 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:44:16.459601    8147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:44:16.462566    8147 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:44:16.465604    8147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:44:16.469007    8147 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:44:16.469071    8147 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:44:16.469116    8147 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:44:16.473560    8147 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:44:16.480567    8147 start.go:297] selected driver: qemu2
	I0610 03:44:16.480573    8147 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:44:16.480577    8147 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:44:16.482787    8147 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:44:16.486445    8147 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:44:16.489676    8147 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:44:16.489714    8147 cni.go:84] Creating CNI manager for "calico"
	I0610 03:44:16.489718    8147 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0610 03:44:16.489747    8147 start.go:340] cluster config:
	{Name:calico-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:44:16.494309    8147 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:44:16.501491    8147 out.go:177] * Starting "calico-811000" primary control-plane node in "calico-811000" cluster
	I0610 03:44:16.505493    8147 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:44:16.505505    8147 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:44:16.505512    8147 cache.go:56] Caching tarball of preloaded images
	I0610 03:44:16.505562    8147 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:44:16.505566    8147 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:44:16.505614    8147 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/calico-811000/config.json ...
	I0610 03:44:16.505629    8147 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/calico-811000/config.json: {Name:mkae2d2adf8a3322cb345b5ee41930cc99209136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:44:16.505825    8147 start.go:360] acquireMachinesLock for calico-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:16.505856    8147 start.go:364] duration metric: took 25.209µs to acquireMachinesLock for "calico-811000"
	I0610 03:44:16.505865    8147 start.go:93] Provisioning new machine with config: &{Name:calico-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:16.505892    8147 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:16.514503    8147 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:16.529589    8147 start.go:159] libmachine.API.Create for "calico-811000" (driver="qemu2")
	I0610 03:44:16.529615    8147 client.go:168] LocalClient.Create starting
	I0610 03:44:16.529670    8147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:16.529699    8147 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:16.529711    8147 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:16.529767    8147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:16.529790    8147 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:16.529799    8147 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:16.530236    8147 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:16.675312    8147 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:16.818359    8147 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:16.818366    8147 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:16.818572    8147 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2
	I0610 03:44:16.831332    8147 main.go:141] libmachine: STDOUT: 
	I0610 03:44:16.831351    8147 main.go:141] libmachine: STDERR: 
	I0610 03:44:16.831397    8147 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2 +20000M
	I0610 03:44:16.842481    8147 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:16.842507    8147 main.go:141] libmachine: STDERR: 
	I0610 03:44:16.842525    8147 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2
	I0610 03:44:16.842532    8147 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:16.842559    8147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:1e:40:b3:cd:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2
	I0610 03:44:16.844375    8147 main.go:141] libmachine: STDOUT: 
	I0610 03:44:16.844393    8147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:16.844414    8147 client.go:171] duration metric: took 314.789625ms to LocalClient.Create
	I0610 03:44:18.846120    8147 start.go:128] duration metric: took 2.340183667s to createHost
	I0610 03:44:18.846152    8147 start.go:83] releasing machines lock for "calico-811000", held for 2.340266083s
	W0610 03:44:18.846193    8147 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:18.855353    8147 out.go:177] * Deleting "calico-811000" in qemu2 ...
	W0610 03:44:18.875170    8147 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:18.875184    8147 start.go:728] Will try again in 5 seconds ...
	I0610 03:44:23.877525    8147 start.go:360] acquireMachinesLock for calico-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:23.878079    8147 start.go:364] duration metric: took 439.291µs to acquireMachinesLock for "calico-811000"
	I0610 03:44:23.878165    8147 start.go:93] Provisioning new machine with config: &{Name:calico-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:23.878470    8147 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:23.889099    8147 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:23.935530    8147 start.go:159] libmachine.API.Create for "calico-811000" (driver="qemu2")
	I0610 03:44:23.935580    8147 client.go:168] LocalClient.Create starting
	I0610 03:44:23.935728    8147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:23.935809    8147 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:23.935824    8147 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:23.935890    8147 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:23.935935    8147 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:23.935952    8147 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:23.936623    8147 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:24.090180    8147 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:24.319310    8147 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:24.319320    8147 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:24.319547    8147 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2
	I0610 03:44:24.332866    8147 main.go:141] libmachine: STDOUT: 
	I0610 03:44:24.332891    8147 main.go:141] libmachine: STDERR: 
	I0610 03:44:24.332953    8147 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2 +20000M
	I0610 03:44:24.344472    8147 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:24.344493    8147 main.go:141] libmachine: STDERR: 
	I0610 03:44:24.344508    8147 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2
	I0610 03:44:24.344526    8147 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:24.344584    8147 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:71:50:b9:17:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/calico-811000/disk.qcow2
	I0610 03:44:24.346378    8147 main.go:141] libmachine: STDOUT: 
	I0610 03:44:24.346393    8147 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:24.346405    8147 client.go:171] duration metric: took 410.813875ms to LocalClient.Create
	I0610 03:44:26.348638    8147 start.go:128] duration metric: took 2.47010025s to createHost
	I0610 03:44:26.348721    8147 start.go:83] releasing machines lock for "calico-811000", held for 2.470586667s
	W0610 03:44:26.349080    8147 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:26.358718    8147 out.go:177] 
	W0610 03:44:26.364963    8147 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:44:26.365024    8147 out.go:239] * 
	* 
	W0610 03:44:26.367539    8147 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:44:26.378614    8147 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.00s)
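The calico run fails identically: the CNI choice never matters, because provisioning dies before the guest boots. Since every plugin variant in this group trips over the same unreachable socket_vmnet daemon, a preflight guard could turn each ~10s failure into an immediate skip. A hedged sketch follows, assuming a hypothetical helper named requireSocketVMnet that net_test.go does not actually define:

package nettest

import (
	"net"
	"testing"
	"time"
)

// requireSocketVMnet skips the calling test when the socket_vmnet daemon
// is not accepting connections, instead of letting each start fail.
// Hypothetical helper for illustration only.
func requireSocketVMnet(t *testing.T, path string) {
	t.Helper()
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		t.Skipf("socket_vmnet not reachable at %s: %v", path, err)
	}
	conn.Close()
}

A test would call requireSocketVMnet(t, "/var/run/socket_vmnet") before invoking out/minikube-darwin-arm64 start, so an environmental outage on the agent reads as SKIP rather than FAIL.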

TestNetworkPlugins/group/custom-flannel/Start (9.9s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.899651792s)

-- stdout --
	* [custom-flannel-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-811000" primary control-plane node in "custom-flannel-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:44:28.825948    8265 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:44:28.826090    8265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:28.826094    8265 out.go:304] Setting ErrFile to fd 2...
	I0610 03:44:28.826096    8265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:28.826225    8265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:44:28.827329    8265 out.go:298] Setting JSON to false
	I0610 03:44:28.843916    8265 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6239,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:44:28.843986    8265 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:44:28.850149    8265 out.go:177] * [custom-flannel-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:44:28.857123    8265 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:44:28.860193    8265 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:44:28.857184    8265 notify.go:220] Checking for updates...
	I0610 03:44:28.866118    8265 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:44:28.870071    8265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:44:28.873041    8265 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:44:28.876106    8265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:44:28.879595    8265 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:44:28.879662    8265 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:44:28.879717    8265 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:44:28.884119    8265 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:44:28.891120    8265 start.go:297] selected driver: qemu2
	I0610 03:44:28.891125    8265 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:44:28.891130    8265 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:44:28.893275    8265 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:44:28.896080    8265 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:44:28.899170    8265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:44:28.899200    8265 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0610 03:44:28.899208    8265 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0610 03:44:28.899239    8265 start.go:340] cluster config:
	{Name:custom-flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:44:28.903339    8265 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:44:28.912102    8265 out.go:177] * Starting "custom-flannel-811000" primary control-plane node in "custom-flannel-811000" cluster
	I0610 03:44:28.916162    8265 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:44:28.916189    8265 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:44:28.916201    8265 cache.go:56] Caching tarball of preloaded images
	I0610 03:44:28.916271    8265 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:44:28.916277    8265 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:44:28.916343    8265 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/custom-flannel-811000/config.json ...
	I0610 03:44:28.916354    8265 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/custom-flannel-811000/config.json: {Name:mk45303fd63cd8c7b87b635b2f91bb6d1b5eeabf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:44:28.916564    8265 start.go:360] acquireMachinesLock for custom-flannel-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:28.916596    8265 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "custom-flannel-811000"
	I0610 03:44:28.916605    8265 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:28.916642    8265 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:28.925091    8265 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:28.940151    8265 start.go:159] libmachine.API.Create for "custom-flannel-811000" (driver="qemu2")
	I0610 03:44:28.940184    8265 client.go:168] LocalClient.Create starting
	I0610 03:44:28.940246    8265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:28.940274    8265 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:28.940289    8265 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:28.940338    8265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:28.940361    8265 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:28.940368    8265 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:28.940711    8265 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:29.084197    8265 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:29.263299    8265 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:29.263306    8265 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:29.263490    8265 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0610 03:44:29.276455    8265 main.go:141] libmachine: STDOUT: 
	I0610 03:44:29.276477    8265 main.go:141] libmachine: STDERR: 
	I0610 03:44:29.276539    8265 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2 +20000M
	I0610 03:44:29.287464    8265 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:29.287491    8265 main.go:141] libmachine: STDERR: 
	I0610 03:44:29.287506    8265 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0610 03:44:29.287511    8265 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:29.287539    8265 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:3f:bd:56:67:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0610 03:44:29.289201    8265 main.go:141] libmachine: STDOUT: 
	I0610 03:44:29.289217    8265 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:29.289238    8265 client.go:171] duration metric: took 349.044167ms to LocalClient.Create
	I0610 03:44:31.291617    8265 start.go:128] duration metric: took 2.37492075s to createHost
	I0610 03:44:31.291712    8265 start.go:83] releasing machines lock for "custom-flannel-811000", held for 2.375081291s
	W0610 03:44:31.291808    8265 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:31.303248    8265 out.go:177] * Deleting "custom-flannel-811000" in qemu2 ...
	W0610 03:44:31.331594    8265 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:31.331628    8265 start.go:728] Will try again in 5 seconds ...
	I0610 03:44:36.334070    8265 start.go:360] acquireMachinesLock for custom-flannel-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:36.334650    8265 start.go:364] duration metric: took 455.042µs to acquireMachinesLock for "custom-flannel-811000"
	I0610 03:44:36.334922    8265 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:custom-flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:36.335233    8265 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:36.344790    8265 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:36.394734    8265 start.go:159] libmachine.API.Create for "custom-flannel-811000" (driver="qemu2")
	I0610 03:44:36.394783    8265 client.go:168] LocalClient.Create starting
	I0610 03:44:36.394925    8265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:36.394992    8265 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:36.395005    8265 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:36.395085    8265 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:36.395153    8265 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:36.395164    8265 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:36.395829    8265 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:36.549107    8265 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:36.630592    8265 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:36.630600    8265 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:36.630812    8265 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0610 03:44:36.643514    8265 main.go:141] libmachine: STDOUT: 
	I0610 03:44:36.643551    8265 main.go:141] libmachine: STDERR: 
	I0610 03:44:36.643618    8265 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2 +20000M
	I0610 03:44:36.654955    8265 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:36.654974    8265 main.go:141] libmachine: STDERR: 
	I0610 03:44:36.654985    8265 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0610 03:44:36.654988    8265 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:36.655023    8265 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:eb:1f:ab:3d:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/custom-flannel-811000/disk.qcow2
	I0610 03:44:36.656739    8265 main.go:141] libmachine: STDOUT: 
	I0610 03:44:36.656755    8265 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:36.656768    8265 client.go:171] duration metric: took 261.977708ms to LocalClient.Create
	I0610 03:44:38.659009    8265 start.go:128] duration metric: took 2.323709792s to createHost
	I0610 03:44:38.659107    8265 start.go:83] releasing machines lock for "custom-flannel-811000", held for 2.32440725s
	W0610 03:44:38.659506    8265 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:38.668134    8265 out.go:177] 
	W0610 03:44:38.674290    8265 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:44:38.674338    8265 out.go:239] * 
	W0610 03:44:38.677075    8265 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:44:38.684112    8265 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.90s)
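
The whole group fails the same way: socket_vmnet_client cannot reach the UNIX socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never gets a network backend, the VM never boots, and minikube gives up after a single retry with exit status 80. A minimal triage sequence for the build host, assuming the /opt/socket_vmnet layout shown in the log (the daemon launch line follows the socket_vmnet README; the gateway address is an assumption, not taken from this report):

	# Does the socket exist, and is a daemon accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null    # macOS nc can probe UNIX sockets with -U
	# If the probe is refused, (re)start the daemon; vmnet.framework requires root
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the agent runs socket_vmnet as a launchd/Homebrew service, restarting that service is the equivalent fix.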

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.788296125s)

-- stdout --
	* [false-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-811000" primary control-plane node in "false-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:44:41.114072    8383 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:44:41.114218    8383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:41.114221    8383 out.go:304] Setting ErrFile to fd 2...
	I0610 03:44:41.114223    8383 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:41.114360    8383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:44:41.115438    8383 out.go:298] Setting JSON to false
	I0610 03:44:41.131769    8383 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6252,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:44:41.131832    8383 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:44:41.135922    8383 out.go:177] * [false-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:44:41.144010    8383 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:44:41.148010    8383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:44:41.144110    8383 notify.go:220] Checking for updates...
	I0610 03:44:41.153988    8383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:44:41.157033    8383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:44:41.159996    8383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:44:41.163005    8383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:44:41.166270    8383 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:44:41.166337    8383 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:44:41.166386    8383 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:44:41.170949    8383 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:44:41.176891    8383 start.go:297] selected driver: qemu2
	I0610 03:44:41.176898    8383 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:44:41.176904    8383 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:44:41.179052    8383 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:44:41.181978    8383 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:44:41.185055    8383 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:44:41.185068    8383 cni.go:84] Creating CNI manager for "false"
	I0610 03:44:41.185099    8383 start.go:340] cluster config:
	{Name:false-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:44:41.189464    8383 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:44:41.197025    8383 out.go:177] * Starting "false-811000" primary control-plane node in "false-811000" cluster
	I0610 03:44:41.201001    8383 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:44:41.201014    8383 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:44:41.201020    8383 cache.go:56] Caching tarball of preloaded images
	I0610 03:44:41.201069    8383 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:44:41.201074    8383 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:44:41.201134    8383 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/false-811000/config.json ...
	I0610 03:44:41.201144    8383 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/false-811000/config.json: {Name:mk58df70c667e4f3bf7a1f33d04ac8f60b717bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:44:41.201358    8383 start.go:360] acquireMachinesLock for false-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:41.201390    8383 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "false-811000"
	I0610 03:44:41.201400    8383 start.go:93] Provisioning new machine with config: &{Name:false-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:41.201427    8383 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:41.208981    8383 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:41.224197    8383 start.go:159] libmachine.API.Create for "false-811000" (driver="qemu2")
	I0610 03:44:41.224225    8383 client.go:168] LocalClient.Create starting
	I0610 03:44:41.224309    8383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:41.224340    8383 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:41.224350    8383 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:41.224392    8383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:41.224415    8383 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:41.224428    8383 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:41.224804    8383 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:41.371046    8383 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:41.470668    8383 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:41.470675    8383 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:41.470865    8383 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2
	I0610 03:44:41.483579    8383 main.go:141] libmachine: STDOUT: 
	I0610 03:44:41.483599    8383 main.go:141] libmachine: STDERR: 
	I0610 03:44:41.483655    8383 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2 +20000M
	I0610 03:44:41.494867    8383 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:41.494880    8383 main.go:141] libmachine: STDERR: 
	I0610 03:44:41.494895    8383 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2
	I0610 03:44:41.494900    8383 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:41.494935    8383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:47:7d:27:06:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2
	I0610 03:44:41.496606    8383 main.go:141] libmachine: STDOUT: 
	I0610 03:44:41.496620    8383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:41.496639    8383 client.go:171] duration metric: took 272.405ms to LocalClient.Create
	I0610 03:44:43.498980    8383 start.go:128] duration metric: took 2.29742175s to createHost
	I0610 03:44:43.499066    8383 start.go:83] releasing machines lock for "false-811000", held for 2.297642042s
	W0610 03:44:43.499150    8383 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:43.509771    8383 out.go:177] * Deleting "false-811000" in qemu2 ...
	W0610 03:44:43.537612    8383 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:43.537642    8383 start.go:728] Will try again in 5 seconds ...
	I0610 03:44:48.539867    8383 start.go:360] acquireMachinesLock for false-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:48.540377    8383 start.go:364] duration metric: took 355.583µs to acquireMachinesLock for "false-811000"
	I0610 03:44:48.540434    8383 start.go:93] Provisioning new machine with config: &{Name:false-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:false-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:48.540753    8383 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:48.556206    8383 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:48.603534    8383 start.go:159] libmachine.API.Create for "false-811000" (driver="qemu2")
	I0610 03:44:48.603590    8383 client.go:168] LocalClient.Create starting
	I0610 03:44:48.603710    8383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:48.603800    8383 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:48.603817    8383 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:48.603877    8383 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:48.603923    8383 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:48.603940    8383 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:48.604619    8383 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:48.759823    8383 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:48.810612    8383 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:48.810619    8383 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:48.810795    8383 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2
	I0610 03:44:48.823405    8383 main.go:141] libmachine: STDOUT: 
	I0610 03:44:48.823424    8383 main.go:141] libmachine: STDERR: 
	I0610 03:44:48.823484    8383 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2 +20000M
	I0610 03:44:48.834407    8383 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:48.834435    8383 main.go:141] libmachine: STDERR: 
	I0610 03:44:48.834450    8383 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2
	I0610 03:44:48.834455    8383 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:48.834497    8383 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:3c:4a:6f:59:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/false-811000/disk.qcow2
	I0610 03:44:48.836263    8383 main.go:141] libmachine: STDOUT: 
	I0610 03:44:48.836281    8383 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:48.836294    8383 client.go:171] duration metric: took 232.694167ms to LocalClient.Create
	I0610 03:44:50.838526    8383 start.go:128] duration metric: took 2.297707083s to createHost
	I0610 03:44:50.838643    8383 start.go:83] releasing machines lock for "false-811000", held for 2.298217583s
	W0610 03:44:50.838981    8383 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:50.847469    8383 out.go:177] 
	W0610 03:44:50.853682    8383 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:44:50.853757    8383 out.go:239] * 
	W0610 03:44:50.855896    8383 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:44:50.864657    8383 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)
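
Same two-phase pattern as the custom-flannel failure above: the first createHost attempt hits the refused socket, minikube deletes the half-created machine, waits 5 seconds, retries once, and then exits via GUEST_PROVISION, which net_test.go:114 surfaces as exit status 80. The failure reproduces outside the harness with the exact command the test runs (copied from the log above; only the exit-code check is added):

	out/minikube-darwin-arm64 start -p false-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2
	echo $?    # 80 (GUEST_PROVISION) for as long as /var/run/socket_vmnet is unreachable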

TestNetworkPlugins/group/enable-default-cni/Start (10.02s)
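
This variant exercises the deprecated --enable-default-cni flag. As the log below records (start_flags.go:464), minikube rewrites that flag to --cni=bridge, so an equivalent modern invocation would be (same flags as the test command, with only the CNI option swapped per the deprecation message):

	out/minikube-darwin-arm64 start -p enable-default-cni-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2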

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.023157084s)

-- stdout --
	* [enable-default-cni-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-811000" primary control-plane node in "enable-default-cni-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:44:53.071149    8499 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:44:53.071277    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:53.071279    8499 out.go:304] Setting ErrFile to fd 2...
	I0610 03:44:53.071282    8499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:44:53.071445    8499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:44:53.072483    8499 out.go:298] Setting JSON to false
	I0610 03:44:53.089066    8499 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6264,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:44:53.089164    8499 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:44:53.095014    8499 out.go:177] * [enable-default-cni-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:44:53.102864    8499 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:44:53.102928    8499 notify.go:220] Checking for updates...
	I0610 03:44:53.106986    8499 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:44:53.110959    8499 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:44:53.113935    8499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:44:53.117034    8499 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:44:53.120013    8499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:44:53.121973    8499 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:44:53.122044    8499 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:44:53.122093    8499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:44:53.126010    8499 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:44:53.132881    8499 start.go:297] selected driver: qemu2
	I0610 03:44:53.132887    8499 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:44:53.132893    8499 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:44:53.135080    8499 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:44:53.138961    8499 out.go:177] * Automatically selected the socket_vmnet network
	E0610 03:44:53.142035    8499 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0610 03:44:53.142049    8499 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:44:53.142063    8499 cni.go:84] Creating CNI manager for "bridge"
	I0610 03:44:53.142070    8499 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:44:53.142114    8499 start.go:340] cluster config:
	{Name:enable-default-cni-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:44:53.146581    8499 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:44:53.156053    8499 out.go:177] * Starting "enable-default-cni-811000" primary control-plane node in "enable-default-cni-811000" cluster
	I0610 03:44:53.159033    8499 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:44:53.159051    8499 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:44:53.159059    8499 cache.go:56] Caching tarball of preloaded images
	I0610 03:44:53.159126    8499 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:44:53.159132    8499 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:44:53.159217    8499 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/enable-default-cni-811000/config.json ...
	I0610 03:44:53.159234    8499 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/enable-default-cni-811000/config.json: {Name:mkf9293874e3cfae4d103c30504e0470d3716d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:44:53.159525    8499 start.go:360] acquireMachinesLock for enable-default-cni-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:44:53.159559    8499 start.go:364] duration metric: took 26.041µs to acquireMachinesLock for "enable-default-cni-811000"
	I0610 03:44:53.159569    8499 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:44:53.159600    8499 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:44:53.164055    8499 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:44:53.179420    8499 start.go:159] libmachine.API.Create for "enable-default-cni-811000" (driver="qemu2")
	I0610 03:44:53.179454    8499 client.go:168] LocalClient.Create starting
	I0610 03:44:53.179513    8499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:44:53.179545    8499 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:53.179552    8499 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:53.179596    8499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:44:53.179618    8499 main.go:141] libmachine: Decoding PEM data...
	I0610 03:44:53.179627    8499 main.go:141] libmachine: Parsing certificate...
	I0610 03:44:53.180021    8499 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:44:53.324073    8499 main.go:141] libmachine: Creating SSH key...
	I0610 03:44:53.425302    8499 main.go:141] libmachine: Creating Disk image...
	I0610 03:44:53.425313    8499 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:44:53.425519    8499 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0610 03:44:53.439859    8499 main.go:141] libmachine: STDOUT: 
	I0610 03:44:53.439882    8499 main.go:141] libmachine: STDERR: 
	I0610 03:44:53.439951    8499 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2 +20000M
	I0610 03:44:53.452998    8499 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:44:53.453020    8499 main.go:141] libmachine: STDERR: 
	I0610 03:44:53.453041    8499 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0610 03:44:53.453046    8499 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:44:53.453090    8499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:ca:62:f9:43:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0610 03:44:53.455167    8499 main.go:141] libmachine: STDOUT: 
	I0610 03:44:53.455186    8499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:44:53.455210    8499 client.go:171] duration metric: took 275.747459ms to LocalClient.Create
	I0610 03:44:55.457556    8499 start.go:128] duration metric: took 2.2978815s to createHost
	I0610 03:44:55.457682    8499 start.go:83] releasing machines lock for "enable-default-cni-811000", held for 2.298089333s
	W0610 03:44:55.457762    8499 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:55.464565    8499 out.go:177] * Deleting "enable-default-cni-811000" in qemu2 ...
	W0610 03:44:55.493791    8499 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:44:55.493822    8499 start.go:728] Will try again in 5 seconds ...
	I0610 03:45:00.496079    8499 start.go:360] acquireMachinesLock for enable-default-cni-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:00.496727    8499 start.go:364] duration metric: took 498.042µs to acquireMachinesLock for "enable-default-cni-811000"
	I0610 03:45:00.496876    8499 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:enable-default-cni-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:00.497202    8499 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:00.504126    8499 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:45:00.552075    8499 start.go:159] libmachine.API.Create for "enable-default-cni-811000" (driver="qemu2")
	I0610 03:45:00.552128    8499 client.go:168] LocalClient.Create starting
	I0610 03:45:00.552252    8499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:00.552333    8499 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:00.552349    8499 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:00.552406    8499 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:00.552465    8499 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:00.552487    8499 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:00.553416    8499 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:00.707284    8499 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:01.001766    8499 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:01.001777    8499 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:01.001976    8499 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0610 03:45:01.015026    8499 main.go:141] libmachine: STDOUT: 
	I0610 03:45:01.015046    8499 main.go:141] libmachine: STDERR: 
	I0610 03:45:01.015127    8499 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2 +20000M
	I0610 03:45:01.026582    8499 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:01.026598    8499 main.go:141] libmachine: STDERR: 
	I0610 03:45:01.026617    8499 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0610 03:45:01.026622    8499 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:01.026672    8499 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:b2:dd:1b:bc:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/enable-default-cni-811000/disk.qcow2
	I0610 03:45:01.028470    8499 main.go:141] libmachine: STDOUT: 
	I0610 03:45:01.028484    8499 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:01.028497    8499 client.go:171] duration metric: took 476.35775ms to LocalClient.Create
	I0610 03:45:03.030861    8499 start.go:128] duration metric: took 2.533546334s to createHost
	I0610 03:45:03.030965    8499 start.go:83] releasing machines lock for "enable-default-cni-811000", held for 2.534185125s
	W0610 03:45:03.031273    8499 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:03.040788    8499 out.go:177] 
	W0610 03:45:03.044781    8499 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:45:03.044826    8499 out.go:239] * 
	* 
	W0610 03:45:03.046368    8499 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:45:03.056706    8499 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.02s)
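
Every failure in this group shares one root cause, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and createHost gives up after the single retry. A minimal Go sketch, assuming access to the CI host, that checks whether anything is listening on that socket (an illustration, not part of the test harness):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Dial the same unix socket the failing runs hand to
    	// /opt/socket_vmnet/bin/socket_vmnet_client (SocketVMnetPath in the
    	// cluster config). "connection refused" here reproduces the STDERR
    	// captured above without starting QEMU at all.
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		fmt.Println("socket_vmnet unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If the dial succeeds, the next suspect would be permissions on the socket rather than a stopped daemon.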

TestNetworkPlugins/group/flannel/Start (9.74s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.742850834s)

-- stdout --
	* [flannel-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-811000" primary control-plane node in "flannel-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:45:05.268201    8613 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:45:05.268319    8613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:05.268323    8613 out.go:304] Setting ErrFile to fd 2...
	I0610 03:45:05.268325    8613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:05.268476    8613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:45:05.269552    8613 out.go:298] Setting JSON to false
	I0610 03:45:05.286162    8613 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6276,"bootTime":1718010029,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:45:05.286235    8613 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:45:05.292014    8613 out.go:177] * [flannel-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:45:05.299021    8613 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:45:05.302018    8613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:45:05.299068    8613 notify.go:220] Checking for updates...
	I0610 03:45:05.308008    8613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:45:05.311035    8613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:45:05.313951    8613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:45:05.317004    8613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:45:05.320345    8613 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:45:05.320411    8613 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:45:05.320468    8613 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:45:05.324999    8613 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:45:05.332001    8613 start.go:297] selected driver: qemu2
	I0610 03:45:05.332006    8613 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:45:05.332011    8613 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:45:05.334188    8613 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:45:05.337956    8613 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:45:05.341079    8613 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:45:05.341133    8613 cni.go:84] Creating CNI manager for "flannel"
	I0610 03:45:05.341137    8613 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0610 03:45:05.341175    8613 start.go:340] cluster config:
	{Name:flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:45:05.345604    8613 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:05.350921    8613 out.go:177] * Starting "flannel-811000" primary control-plane node in "flannel-811000" cluster
	I0610 03:45:05.355009    8613 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:45:05.355033    8613 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:45:05.355042    8613 cache.go:56] Caching tarball of preloaded images
	I0610 03:45:05.355119    8613 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:45:05.355124    8613 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:45:05.355197    8613 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/flannel-811000/config.json ...
	I0610 03:45:05.355207    8613 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/flannel-811000/config.json: {Name:mk5415e56d818128a8f469f3122ee54916e579e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:45:05.355407    8613 start.go:360] acquireMachinesLock for flannel-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:05.355437    8613 start.go:364] duration metric: took 24.667µs to acquireMachinesLock for "flannel-811000"
	I0610 03:45:05.355447    8613 start.go:93] Provisioning new machine with config: &{Name:flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:05.355475    8613 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:05.362922    8613 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:45:05.377656    8613 start.go:159] libmachine.API.Create for "flannel-811000" (driver="qemu2")
	I0610 03:45:05.377684    8613 client.go:168] LocalClient.Create starting
	I0610 03:45:05.377744    8613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:05.377772    8613 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:05.377784    8613 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:05.377830    8613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:05.377855    8613 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:05.377869    8613 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:05.378226    8613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:05.524897    8613 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:05.577911    8613 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:05.577916    8613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:05.578093    8613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2
	I0610 03:45:05.590547    8613 main.go:141] libmachine: STDOUT: 
	I0610 03:45:05.590574    8613 main.go:141] libmachine: STDERR: 
	I0610 03:45:05.590629    8613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2 +20000M
	I0610 03:45:05.601972    8613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:05.601993    8613 main.go:141] libmachine: STDERR: 
	I0610 03:45:05.602015    8613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2
	I0610 03:45:05.602021    8613 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:05.602057    8613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:49:56:fb:51:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2
	I0610 03:45:05.603755    8613 main.go:141] libmachine: STDOUT: 
	I0610 03:45:05.603768    8613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:05.603790    8613 client.go:171] duration metric: took 226.096709ms to LocalClient.Create
	I0610 03:45:07.606057    8613 start.go:128] duration metric: took 2.250531875s to createHost
	I0610 03:45:07.606134    8613 start.go:83] releasing machines lock for "flannel-811000", held for 2.250663542s
	W0610 03:45:07.606212    8613 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:07.622546    8613 out.go:177] * Deleting "flannel-811000" in qemu2 ...
	W0610 03:45:07.649522    8613 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:07.649557    8613 start.go:728] Will try again in 5 seconds ...
	I0610 03:45:12.651810    8613 start.go:360] acquireMachinesLock for flannel-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:12.652366    8613 start.go:364] duration metric: took 445.417µs to acquireMachinesLock for "flannel-811000"
	I0610 03:45:12.652447    8613 start.go:93] Provisioning new machine with config: &{Name:flannel-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:flannel-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:12.652812    8613 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:12.660479    8613 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:45:12.710412    8613 start.go:159] libmachine.API.Create for "flannel-811000" (driver="qemu2")
	I0610 03:45:12.710469    8613 client.go:168] LocalClient.Create starting
	I0610 03:45:12.710589    8613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:12.710654    8613 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:12.710673    8613 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:12.710750    8613 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:12.710797    8613 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:12.710810    8613 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:12.711333    8613 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:12.867924    8613 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:12.913076    8613 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:12.913082    8613 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:12.913258    8613 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2
	I0610 03:45:12.925794    8613 main.go:141] libmachine: STDOUT: 
	I0610 03:45:12.925814    8613 main.go:141] libmachine: STDERR: 
	I0610 03:45:12.925877    8613 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2 +20000M
	I0610 03:45:12.937274    8613 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:12.937292    8613 main.go:141] libmachine: STDERR: 
	I0610 03:45:12.937309    8613 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2
	I0610 03:45:12.937313    8613 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:12.937344    8613 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:fd:55:a9:d6:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/flannel-811000/disk.qcow2
	I0610 03:45:12.939090    8613 main.go:141] libmachine: STDOUT: 
	I0610 03:45:12.939106    8613 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:12.939117    8613 client.go:171] duration metric: took 228.638375ms to LocalClient.Create
	I0610 03:45:14.941503    8613 start.go:128] duration metric: took 2.288579334s to createHost
	I0610 03:45:14.941606    8613 start.go:83] releasing machines lock for "flannel-811000", held for 2.289181s
	W0610 03:45:14.942007    8613 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:14.953601    8613 out.go:177] 
	W0610 03:45:14.957587    8613 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:45:14.957604    8613 out.go:239] * 
	* 
	W0610 03:45:14.959441    8613 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:45:14.968380    8613 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.74s)
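
The stderr above also shows minikube's retry shape: the first createHost fails, the partially created profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), and a second createHost runs before the process exits with GUEST_PROVISION. A rough Go sketch of that control flow, with illustrative names rather than minikube's actual API:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // createHost stands in for the provisioning step that fails twice in the log.
    func createHost() error {
    	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
    	if err := createHost(); err != nil {
    		fmt.Println("! StartHost failed, but will try again:", err)
    		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
    		if err := createHost(); err != nil {
    			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
    		}
    	}
    }

Because the root cause is environmental, the retry fails identically, which is why every test in this group lands near the ten-second mark: two ~2.5s createHost attempts plus the 5s wait.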

TestNetworkPlugins/group/bridge/Start (9.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.876705625s)

-- stdout --
	* [bridge-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-811000" primary control-plane node in "bridge-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:45:17.395207    8737 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:45:17.395368    8737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:17.395371    8737 out.go:304] Setting ErrFile to fd 2...
	I0610 03:45:17.395373    8737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:17.395504    8737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:45:17.396546    8737 out.go:298] Setting JSON to false
	I0610 03:45:17.413222    8737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6288,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:45:17.413284    8737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:45:17.420483    8737 out.go:177] * [bridge-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:45:17.433430    8737 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:45:17.428487    8737 notify.go:220] Checking for updates...
	I0610 03:45:17.439309    8737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:45:17.442353    8737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:45:17.445389    8737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:45:17.448283    8737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:45:17.451427    8737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:45:17.454797    8737 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:45:17.454859    8737 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:45:17.454905    8737 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:45:17.458402    8737 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:45:17.465422    8737 start.go:297] selected driver: qemu2
	I0610 03:45:17.465430    8737 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:45:17.465436    8737 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:45:17.467559    8737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:45:17.469139    8737 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:45:17.472464    8737 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:45:17.472501    8737 cni.go:84] Creating CNI manager for "bridge"
	I0610 03:45:17.472505    8737 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:45:17.472541    8737 start.go:340] cluster config:
	{Name:bridge-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:45:17.476817    8737 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:17.484344    8737 out.go:177] * Starting "bridge-811000" primary control-plane node in "bridge-811000" cluster
	I0610 03:45:17.488378    8737 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:45:17.488394    8737 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:45:17.488401    8737 cache.go:56] Caching tarball of preloaded images
	I0610 03:45:17.488457    8737 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:45:17.488463    8737 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:45:17.488523    8737 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/bridge-811000/config.json ...
	I0610 03:45:17.488533    8737 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/bridge-811000/config.json: {Name:mk186a2772c74c8672864edcf0bb473c71830fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:45:17.488938    8737 start.go:360] acquireMachinesLock for bridge-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:17.488973    8737 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "bridge-811000"
	I0610 03:45:17.488984    8737 start.go:93] Provisioning new machine with config: &{Name:bridge-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:17.489010    8737 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:17.497353    8737 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:45:17.514022    8737 start.go:159] libmachine.API.Create for "bridge-811000" (driver="qemu2")
	I0610 03:45:17.514050    8737 client.go:168] LocalClient.Create starting
	I0610 03:45:17.514122    8737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:17.514153    8737 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:17.514166    8737 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:17.514206    8737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:17.514231    8737 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:17.514238    8737 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:17.514603    8737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:17.659440    8737 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:17.885487    8737 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:17.885500    8737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:17.885725    8737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2
	I0610 03:45:17.898736    8737 main.go:141] libmachine: STDOUT: 
	I0610 03:45:17.898760    8737 main.go:141] libmachine: STDERR: 
	I0610 03:45:17.898821    8737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2 +20000M
	I0610 03:45:17.909959    8737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:17.909972    8737 main.go:141] libmachine: STDERR: 
	I0610 03:45:17.909994    8737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2
	I0610 03:45:17.909998    8737 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:17.910027    8737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:ba:88:c4:2d:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2
	I0610 03:45:17.911734    8737 main.go:141] libmachine: STDOUT: 
	I0610 03:45:17.911748    8737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:17.911766    8737 client.go:171] duration metric: took 397.704833ms to LocalClient.Create
	I0610 03:45:19.913897    8737 start.go:128] duration metric: took 2.424851542s to createHost
	I0610 03:45:19.913937    8737 start.go:83] releasing machines lock for "bridge-811000", held for 2.424932625s
	W0610 03:45:19.913971    8737 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:19.926995    8737 out.go:177] * Deleting "bridge-811000" in qemu2 ...
	W0610 03:45:19.946806    8737 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:19.946818    8737 start.go:728] Will try again in 5 seconds ...
	I0610 03:45:24.948991    8737 start.go:360] acquireMachinesLock for bridge-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:24.949094    8737 start.go:364] duration metric: took 82.042µs to acquireMachinesLock for "bridge-811000"
	I0610 03:45:24.949108    8737 start.go:93] Provisioning new machine with config: &{Name:bridge-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:24.949152    8737 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:24.960365    8737 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:45:24.976711    8737 start.go:159] libmachine.API.Create for "bridge-811000" (driver="qemu2")
	I0610 03:45:24.976738    8737 client.go:168] LocalClient.Create starting
	I0610 03:45:24.976814    8737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:24.976855    8737 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:24.976863    8737 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:24.976896    8737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:24.976922    8737 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:24.976928    8737 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:24.977223    8737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:25.122361    8737 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:25.163040    8737 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:25.163049    8737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:25.163270    8737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2
	I0610 03:45:25.177418    8737 main.go:141] libmachine: STDOUT: 
	I0610 03:45:25.177436    8737 main.go:141] libmachine: STDERR: 
	I0610 03:45:25.177521    8737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2 +20000M
	I0610 03:45:25.190519    8737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:25.190548    8737 main.go:141] libmachine: STDERR: 
	I0610 03:45:25.190561    8737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2
	I0610 03:45:25.190567    8737 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:25.190603    8737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:0b:e9:ec:64:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/bridge-811000/disk.qcow2
	I0610 03:45:25.192761    8737 main.go:141] libmachine: STDOUT: 
	I0610 03:45:25.192785    8737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:25.192798    8737 client.go:171] duration metric: took 216.053417ms to LocalClient.Create
	I0610 03:45:27.195002    8737 start.go:128] duration metric: took 2.245802125s to createHost
	I0610 03:45:27.195086    8737 start.go:83] releasing machines lock for "bridge-811000", held for 2.245958333s
	W0610 03:45:27.195526    8737 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:27.210224    8737 out.go:177] 
	W0610 03:45:27.214162    8737 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:45:27.214196    8737 out.go:239] * 
	W0610 03:45:27.216913    8737 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:45:27.229195    8737 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.88s)
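
Every start in this group fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu2 VM is never launched. A quick host-side triage sketch, assuming socket_vmnet was installed under /opt/socket_vmnet as the executed command lines show (the Homebrew service name below is an assumption):

	# is the unix socket present, and is any daemon holding it open?
	ls -l /var/run/socket_vmnet
	sudo lsof /var/run/socket_vmnet
	# if installed via Homebrew, restarting the daemon may clear the refusal
	sudo brew services restart socket_vmnet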

TestNetworkPlugins/group/kubenet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-811000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.755303167s)

-- stdout --
	* [kubenet-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-811000" primary control-plane node in "kubenet-811000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-811000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:45:29.454019    8849 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:45:29.454164    8849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:29.454168    8849 out.go:304] Setting ErrFile to fd 2...
	I0610 03:45:29.454170    8849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:29.454311    8849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:45:29.455463    8849 out.go:298] Setting JSON to false
	I0610 03:45:29.472325    8849 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6300,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:45:29.472386    8849 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:45:29.478799    8849 out.go:177] * [kubenet-811000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:45:29.485807    8849 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:45:29.489806    8849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:45:29.485875    8849 notify.go:220] Checking for updates...
	I0610 03:45:29.495734    8849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:45:29.499703    8849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:45:29.502716    8849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:45:29.505776    8849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:45:29.509187    8849 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:45:29.509263    8849 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:45:29.509306    8849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:45:29.513746    8849 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:45:29.520775    8849 start.go:297] selected driver: qemu2
	I0610 03:45:29.520780    8849 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:45:29.520785    8849 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:45:29.523108    8849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:45:29.526764    8849 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:45:29.529855    8849 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:45:29.529908    8849 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0610 03:45:29.529931    8849 start.go:340] cluster config:
	{Name:kubenet-811000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:45:29.534394    8849 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:29.541725    8849 out.go:177] * Starting "kubenet-811000" primary control-plane node in "kubenet-811000" cluster
	I0610 03:45:29.545788    8849 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:45:29.545803    8849 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:45:29.545812    8849 cache.go:56] Caching tarball of preloaded images
	I0610 03:45:29.545872    8849 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:45:29.545878    8849 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:45:29.545968    8849 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/kubenet-811000/config.json ...
	I0610 03:45:29.545983    8849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/kubenet-811000/config.json: {Name:mke4989c2c6a0f970cd98f7affc49c0ad114e597 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:45:29.546281    8849 start.go:360] acquireMachinesLock for kubenet-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:29.546317    8849 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "kubenet-811000"
	I0610 03:45:29.546327    8849 start.go:93] Provisioning new machine with config: &{Name:kubenet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:kubenet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:29.546355    8849 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:29.549736    8849 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:45:29.565270    8849 start.go:159] libmachine.API.Create for "kubenet-811000" (driver="qemu2")
	I0610 03:45:29.565298    8849 client.go:168] LocalClient.Create starting
	I0610 03:45:29.565350    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:29.565397    8849 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:29.565409    8849 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:29.565452    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:29.565474    8849 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:29.565483    8849 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:29.565881    8849 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:29.711191    8849 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:29.799733    8849 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:29.799740    8849 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:29.799927    8849 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2
	I0610 03:45:29.812678    8849 main.go:141] libmachine: STDOUT: 
	I0610 03:45:29.812698    8849 main.go:141] libmachine: STDERR: 
	I0610 03:45:29.812754    8849 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2 +20000M
	I0610 03:45:29.823625    8849 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:29.823648    8849 main.go:141] libmachine: STDERR: 
	I0610 03:45:29.823668    8849 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2
	I0610 03:45:29.823673    8849 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:29.823702    8849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:96:21:b1:72:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2
	I0610 03:45:29.825547    8849 main.go:141] libmachine: STDOUT: 
	I0610 03:45:29.825566    8849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:29.825588    8849 client.go:171] duration metric: took 260.28025ms to LocalClient.Create
	I0610 03:45:31.827874    8849 start.go:128] duration metric: took 2.281460458s to createHost
	I0610 03:45:31.827957    8849 start.go:83] releasing machines lock for "kubenet-811000", held for 2.281607042s
	W0610 03:45:31.828031    8849 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:31.842507    8849 out.go:177] * Deleting "kubenet-811000" in qemu2 ...
	W0610 03:45:31.868227    8849 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:31.868259    8849 start.go:728] Will try again in 5 seconds ...
	I0610 03:45:36.870490    8849 start.go:360] acquireMachinesLock for kubenet-811000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:36.870962    8849 start.go:364] duration metric: took 395.25µs to acquireMachinesLock for "kubenet-811000"
	I0610 03:45:36.871084    8849 start.go:93] Provisioning new machine with config: &{Name:kubenet-811000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:kubenet-811000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:36.871406    8849 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:36.880230    8849 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 03:45:36.920288    8849 start.go:159] libmachine.API.Create for "kubenet-811000" (driver="qemu2")
	I0610 03:45:36.920339    8849 client.go:168] LocalClient.Create starting
	I0610 03:45:36.920456    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:36.920519    8849 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:36.920534    8849 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:36.920622    8849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:36.920665    8849 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:36.920674    8849 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:36.921185    8849 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:37.071572    8849 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:37.113336    8849 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:37.113341    8849 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:37.113529    8849 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2
	I0610 03:45:37.126136    8849 main.go:141] libmachine: STDOUT: 
	I0610 03:45:37.126162    8849 main.go:141] libmachine: STDERR: 
	I0610 03:45:37.126215    8849 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2 +20000M
	I0610 03:45:37.137044    8849 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:37.137062    8849 main.go:141] libmachine: STDERR: 
	I0610 03:45:37.137073    8849 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2
	I0610 03:45:37.137077    8849 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:37.137107    8849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:64:52:e1:02:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/kubenet-811000/disk.qcow2
	I0610 03:45:37.138808    8849 main.go:141] libmachine: STDOUT: 
	I0610 03:45:37.138827    8849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:37.138848    8849 client.go:171] duration metric: took 218.500458ms to LocalClient.Create
	I0610 03:45:39.141073    8849 start.go:128] duration metric: took 2.2696075s to createHost
	I0610 03:45:39.141218    8849 start.go:83] releasing machines lock for "kubenet-811000", held for 2.270148125s
	W0610 03:45:39.141505    8849 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-811000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:39.150256    8849 out.go:177] 
	W0610 03:45:39.156295    8849 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:45:39.156322    8849 out.go:239] * 
	W0610 03:45:39.158985    8849 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:45:39.167200    8849 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.76s)
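
Note that minikube retried host creation once ("Will try again in 5 seconds ...") before giving up, and the final exit status 80 corresponds to the GUEST_PROVISION error class reported in the stderr above. The failure should reproduce outside the test harness with just the logged start command (profile name taken from the log; --network is optional here, since the log shows socket_vmnet being auto-selected):

	out/minikube-darwin-arm64 start -p kubenet-811000 --driver=qemu2 --network=socket_vmnet
	echo $?    # 80 while /var/run/socket_vmnet is unreachable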

TestStartStop/group/old-k8s-version/serial/FirstStart (10.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-407000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-407000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.077289042s)

-- stdout --
	* [old-k8s-version-407000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-407000" primary control-plane node in "old-k8s-version-407000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-407000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:45:41.573374    8963 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:45:41.573542    8963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:41.573546    8963 out.go:304] Setting ErrFile to fd 2...
	I0610 03:45:41.573548    8963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:41.573682    8963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:45:41.574961    8963 out.go:298] Setting JSON to false
	I0610 03:45:41.593404    8963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6312,"bootTime":1718010029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:45:41.593479    8963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:45:41.606285    8963 out.go:177] * [old-k8s-version-407000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:45:41.621307    8963 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:45:41.617378    8963 notify.go:220] Checking for updates...
	I0610 03:45:41.629287    8963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:45:41.639278    8963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:45:41.647277    8963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:45:41.657307    8963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:45:41.660329    8963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:45:41.661979    8963 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:45:41.662047    8963 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:45:41.662101    8963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:45:41.666290    8963 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:45:41.670438    8963 start.go:297] selected driver: qemu2
	I0610 03:45:41.670445    8963 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:45:41.670453    8963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:45:41.672811    8963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:45:41.678276    8963 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:45:41.683334    8963 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:45:41.683363    8963 cni.go:84] Creating CNI manager for ""
	I0610 03:45:41.683371    8963 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 03:45:41.683400    8963 start.go:340] cluster config:
	{Name:old-k8s-version-407000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:45:41.687726    8963 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:41.696267    8963 out.go:177] * Starting "old-k8s-version-407000" primary control-plane node in "old-k8s-version-407000" cluster
	I0610 03:45:41.701351    8963 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:45:41.701374    8963 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 03:45:41.701382    8963 cache.go:56] Caching tarball of preloaded images
	I0610 03:45:41.701467    8963 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:45:41.701473    8963 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 03:45:41.701541    8963 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/old-k8s-version-407000/config.json ...
	I0610 03:45:41.701552    8963 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/old-k8s-version-407000/config.json: {Name:mkc4844f9dd27ab38c91f167556d26b0575e8484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:45:41.701807    8963 start.go:360] acquireMachinesLock for old-k8s-version-407000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:41.701847    8963 start.go:364] duration metric: took 30.333µs to acquireMachinesLock for "old-k8s-version-407000"
	I0610 03:45:41.701857    8963 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:41.701893    8963 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:41.706253    8963 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:45:41.722463    8963 start.go:159] libmachine.API.Create for "old-k8s-version-407000" (driver="qemu2")
	I0610 03:45:41.722493    8963 client.go:168] LocalClient.Create starting
	I0610 03:45:41.722558    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:41.722587    8963 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:41.722602    8963 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:41.722644    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:41.722672    8963 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:41.722682    8963 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:41.723148    8963 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:41.867980    8963 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:42.219963    8963 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:42.219979    8963 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:42.220277    8963 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:42.233787    8963 main.go:141] libmachine: STDOUT: 
	I0610 03:45:42.233807    8963 main.go:141] libmachine: STDERR: 
	I0610 03:45:42.233868    8963 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2 +20000M
	I0610 03:45:42.245026    8963 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:42.245041    8963 main.go:141] libmachine: STDERR: 
	I0610 03:45:42.245061    8963 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:42.245066    8963 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:42.245098    8963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:9e:cf:64:94:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:42.246940    8963 main.go:141] libmachine: STDOUT: 
	I0610 03:45:42.246953    8963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:42.246972    8963 client.go:171] duration metric: took 524.467208ms to LocalClient.Create
	I0610 03:45:44.249215    8963 start.go:128] duration metric: took 2.54726825s to createHost
	I0610 03:45:44.249305    8963 start.go:83] releasing machines lock for "old-k8s-version-407000", held for 2.547421333s
	W0610 03:45:44.249379    8963 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:44.262561    8963 out.go:177] * Deleting "old-k8s-version-407000" in qemu2 ...
	W0610 03:45:44.290827    8963 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:44.290860    8963 start.go:728] Will try again in 5 seconds ...
	I0610 03:45:49.293111    8963 start.go:360] acquireMachinesLock for old-k8s-version-407000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:49.293568    8963 start.go:364] duration metric: took 345.791µs to acquireMachinesLock for "old-k8s-version-407000"
	I0610 03:45:49.293684    8963 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:49.293861    8963 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:49.301514    8963 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:45:49.336915    8963 start.go:159] libmachine.API.Create for "old-k8s-version-407000" (driver="qemu2")
	I0610 03:45:49.336961    8963 client.go:168] LocalClient.Create starting
	I0610 03:45:49.337061    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:49.337120    8963 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:49.337136    8963 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:49.337187    8963 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:49.337228    8963 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:49.337239    8963 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:49.337697    8963 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:49.487958    8963 main.go:141] libmachine: Creating SSH key...
	I0610 03:45:49.550779    8963 main.go:141] libmachine: Creating Disk image...
	I0610 03:45:49.550794    8963 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:45:49.550995    8963 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:49.563562    8963 main.go:141] libmachine: STDOUT: 
	I0610 03:45:49.563584    8963 main.go:141] libmachine: STDERR: 
	I0610 03:45:49.563642    8963 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2 +20000M
	I0610 03:45:49.574939    8963 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:45:49.574956    8963 main.go:141] libmachine: STDERR: 
	I0610 03:45:49.574971    8963 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:49.574978    8963 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:45:49.575021    8963 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:5a:7b:1c:1a:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:49.576824    8963 main.go:141] libmachine: STDOUT: 
	I0610 03:45:49.576839    8963 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:49.576852    8963 client.go:171] duration metric: took 239.884875ms to LocalClient.Create
	I0610 03:45:51.577357    8963 start.go:128] duration metric: took 2.283420333s to createHost
	I0610 03:45:51.583125    8963 start.go:83] releasing machines lock for "old-k8s-version-407000", held for 2.289511916s
	W0610 03:45:51.583365    8963 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:51.589951    8963 out.go:177] 
	W0610 03:45:51.596116    8963 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:45:51.596160    8963 out.go:239] * 
	W0610 03:45:51.597929    8963 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:45:51.607993    8963 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-407000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (58.37975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.14s)
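
The post-mortem status check is informative here: minikube encodes the host, cluster, and Kubernetes states in the low bits of the status exit code, so exit status 7 (all three bits set) means everything is simply stopped rather than the status command itself failing, which is why helpers_test.go marks it "(may be ok)" and skips log retrieval:

	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000
	echo $?    # 7: host, cluster, and kubernetes all report down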

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-407000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-407000 create -f testdata/busybox.yaml: exit status 1 (30.086292ms)

** stderr ** 
	error: context "old-k8s-version-407000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-407000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (29.646625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (29.908584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
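
Because FirstStart exited before a cluster existed, no kubeconfig context named "old-k8s-version-407000" was ever written; this subtest and the kubectl-based ones that follow fail on that missing context rather than on anything they exercise themselves. A quick confirmation sketch (assumes kubectl is on PATH):

	# the failed profile should be absent from the kubeconfig context list
	kubectl config get-contexts -o name | grep old-k8s-version-407000 \
		|| echo "context missing: cascade failure from FirstStart"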

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-407000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-407000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-407000 describe deploy/metrics-server -n kube-system: exit status 1 (27.871208ms)

** stderr ** 
	error: context "old-k8s-version-407000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-407000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (29.682208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-407000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-407000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.176456958s)

-- stdout --
	* [old-k8s-version-407000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-407000" primary control-plane node in "old-k8s-version-407000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:45:53.891655    9011 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:45:53.891795    9011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:53.891799    9011 out.go:304] Setting ErrFile to fd 2...
	I0610 03:45:53.891801    9011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:53.891956    9011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:45:53.892998    9011 out.go:298] Setting JSON to false
	I0610 03:45:53.909648    9011 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6324,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:45:53.909709    9011 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:45:53.914501    9011 out.go:177] * [old-k8s-version-407000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:45:53.920423    9011 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:45:53.923478    9011 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:45:53.920459    9011 notify.go:220] Checking for updates...
	I0610 03:45:53.930399    9011 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:45:53.933404    9011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:45:53.936392    9011 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:45:53.939430    9011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:45:53.942690    9011 config.go:182] Loaded profile config "old-k8s-version-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0610 03:45:53.944527    9011 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 03:45:53.947377    9011 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:45:53.951452    9011 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:45:53.957372    9011 start.go:297] selected driver: qemu2
	I0610 03:45:53.957377    9011 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:45:53.957444    9011 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:45:53.959839    9011 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:45:53.959880    9011 cni.go:84] Creating CNI manager for ""
	I0610 03:45:53.959887    9011 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 03:45:53.959907    9011 start.go:340] cluster config:
	{Name:old-k8s-version-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:45:53.964197    9011 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:53.972415    9011 out.go:177] * Starting "old-k8s-version-407000" primary control-plane node in "old-k8s-version-407000" cluster
	I0610 03:45:53.976427    9011 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:45:53.976441    9011 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 03:45:53.976451    9011 cache.go:56] Caching tarball of preloaded images
	I0610 03:45:53.976517    9011 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:45:53.976521    9011 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 03:45:53.976591    9011 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/old-k8s-version-407000/config.json ...
	I0610 03:45:53.977089    9011 start.go:360] acquireMachinesLock for old-k8s-version-407000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:53.977113    9011 start.go:364] duration metric: took 18.291µs to acquireMachinesLock for "old-k8s-version-407000"
	I0610 03:45:53.977120    9011 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:45:53.977125    9011 fix.go:54] fixHost starting: 
	I0610 03:45:53.977232    9011 fix.go:112] recreateIfNeeded on old-k8s-version-407000: state=Stopped err=<nil>
	W0610 03:45:53.977240    9011 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:45:53.980388    9011 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-407000" ...
	I0610 03:45:53.987539    9011 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:5a:7b:1c:1a:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:53.989414    9011 main.go:141] libmachine: STDOUT: 
	I0610 03:45:53.989431    9011 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:53.989465    9011 fix.go:56] duration metric: took 12.337792ms for fixHost
	I0610 03:45:53.989469    9011 start.go:83] releasing machines lock for "old-k8s-version-407000", held for 12.352208ms
	W0610 03:45:53.989479    9011 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:45:53.989515    9011 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:53.989519    9011 start.go:728] Will try again in 5 seconds ...
	I0610 03:45:58.991694    9011 start.go:360] acquireMachinesLock for old-k8s-version-407000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:58.991898    9011 start.go:364] duration metric: took 152.75µs to acquireMachinesLock for "old-k8s-version-407000"
	I0610 03:45:58.991925    9011 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:45:58.991932    9011 fix.go:54] fixHost starting: 
	I0610 03:45:58.992153    9011 fix.go:112] recreateIfNeeded on old-k8s-version-407000: state=Stopped err=<nil>
	W0610 03:45:58.992162    9011 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:45:59.001175    9011 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-407000" ...
	I0610 03:45:59.004211    9011 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:5a:7b:1c:1a:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/old-k8s-version-407000/disk.qcow2
	I0610 03:45:59.007733    9011 main.go:141] libmachine: STDOUT: 
	I0610 03:45:59.007758    9011 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:45:59.007783    9011 fix.go:56] duration metric: took 15.851834ms for fixHost
	I0610 03:45:59.007789    9011 start.go:83] releasing machines lock for "old-k8s-version-407000", held for 15.883125ms
	W0610 03:45:59.007861    9011 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:45:59.014166    9011 out.go:177] 
	W0610 03:45:59.020241    9011 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:45:59.020250    9011 out.go:239] * 
	* 
	W0610 03:45:59.021013    9011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:45:59.032142    9011 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-407000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (35.401ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)
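
SecondStart exercises minikube's restart-retry path: the first "Restarting existing qemu2 VM" attempt fails, the driver waits five seconds ("Will try again in 5 seconds ..."), retries once against the same refused socket, then exits 80 with the GUEST_PROVISION advice. The recovery the log itself suggests, sketched here with flags trimmed from the full invocation above, would still need a working socket_vmnet daemon first:

	out/minikube-darwin-arm64 delete -p old-k8s-version-407000
	out/minikube-darwin-arm64 start -p old-k8s-version-407000 --driver=qemu2 --kubernetes-version=v1.20.0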

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-407000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (29.397584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-407000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-407000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-407000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.777042ms)

** stderr ** 
	error: context "old-k8s-version-407000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-407000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (28.362791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-407000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (29.83075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
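
The (-want +got) block above is a diff between the expected v1.20.0 image set and the output of "image list --format=json": every expected image sits on the want ("-") side and the got side is empty, which is consistent with a VM that never started rather than with wrong image versions. The listing can be reproduced by hand with the command the test runs verbatim:

	out/minikube-darwin-arm64 -p old-k8s-version-407000 image list --format=json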

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-407000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-407000 --alsologtostderr -v=1: exit status 83 (39.739292ms)

-- stdout --
	* The control-plane node old-k8s-version-407000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-407000"

-- /stdout --
** stderr ** 
	I0610 03:45:59.259792    9034 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:45:59.260807    9034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:59.260812    9034 out.go:304] Setting ErrFile to fd 2...
	I0610 03:45:59.260814    9034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:59.260990    9034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:45:59.261202    9034 out.go:298] Setting JSON to false
	I0610 03:45:59.261210    9034 mustload.go:65] Loading cluster: old-k8s-version-407000
	I0610 03:45:59.261391    9034 config.go:182] Loaded profile config "old-k8s-version-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0610 03:45:59.266134    9034 out.go:177] * The control-plane node old-k8s-version-407000 host is not running: state=Stopped
	I0610 03:45:59.267311    9034 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-407000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-407000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (29.046583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (28.697333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-407000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
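
Three exit codes recur through this group: 80 from the failed starts (GUEST_PROVISION), 7 from "status" against a stopped host (which helpers_test treats as possibly fine), and 83 from commands such as "pause" that decline to act on a non-running host and print start advice instead. A pre-flight guard sketch built on the same status check the post-mortems use (the guard itself is hypothetical, not part of the suite):

	# skip host-dependent commands when the profile is not running
	if [ "$(out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000)" != "Running" ]; then
		echo "host not running; skipping pause"
	fi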

TestStartStop/group/no-preload/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.858456792s)

-- stdout --
	* [no-preload-394000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-394000" primary control-plane node in "no-preload-394000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-394000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:45:59.720717    9057 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:45:59.720854    9057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:59.720857    9057 out.go:304] Setting ErrFile to fd 2...
	I0610 03:45:59.720860    9057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:45:59.721000    9057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:45:59.722103    9057 out.go:298] Setting JSON to false
	I0610 03:45:59.738721    9057 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6330,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:45:59.738783    9057 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:45:59.743282    9057 out.go:177] * [no-preload-394000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:45:59.750499    9057 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:45:59.753435    9057 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:45:59.750567    9057 notify.go:220] Checking for updates...
	I0610 03:45:59.760458    9057 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:45:59.763434    9057 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:45:59.766476    9057 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:45:59.769489    9057 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:45:59.771458    9057 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:45:59.771521    9057 config.go:182] Loaded profile config "stopped-upgrade-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0610 03:45:59.771565    9057 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:45:59.775467    9057 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:45:59.782291    9057 start.go:297] selected driver: qemu2
	I0610 03:45:59.782297    9057 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:45:59.782303    9057 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:45:59.784401    9057 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:45:59.787482    9057 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:45:59.790552    9057 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:45:59.790579    9057 cni.go:84] Creating CNI manager for ""
	I0610 03:45:59.790587    9057 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:45:59.790599    9057 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:45:59.790627    9057 start.go:340] cluster config:
	{Name:no-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:45:59.795030    9057 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.803470    9057 out.go:177] * Starting "no-preload-394000" primary control-plane node in "no-preload-394000" cluster
	I0610 03:45:59.807504    9057 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:45:59.807602    9057 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/no-preload-394000/config.json ...
	I0610 03:45:59.807629    9057 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/no-preload-394000/config.json: {Name:mkea88e46c613ea4e085d84082d222088f539f87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:45:59.807637    9057 cache.go:107] acquiring lock: {Name:mkbb998c36a2212d49da3a6e16d0729d21134180 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807648    9057 cache.go:107] acquiring lock: {Name:mke085d629c33c819d0db1acbfcf2a338c45baf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807645    9057 cache.go:107] acquiring lock: {Name:mk26c01f691f66a90919812d6677798c79591196 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807713    9057 cache.go:107] acquiring lock: {Name:mkf10c144f368593349e834b89555ca89bf6c5e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807723    9057 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 03:45:59.807730    9057 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.083µs
	I0610 03:45:59.807737    9057 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 03:45:59.807819    9057 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0610 03:45:59.807839    9057 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0610 03:45:59.807817    9057 cache.go:107] acquiring lock: {Name:mk6f2cd8a2f42c5fbb4f058f46df71e652fc5f23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807862    9057 cache.go:107] acquiring lock: {Name:mke31be456ecb4b6714076615288e7606ed38646 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807820    9057 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0610 03:45:59.807888    9057 start.go:360] acquireMachinesLock for no-preload-394000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:45:59.807882    9057 cache.go:107] acquiring lock: {Name:mke5d9c091c9a109c0a47b19cbbae97b123b4eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807908    9057 cache.go:107] acquiring lock: {Name:mkbab301c1e383be85cf5d2955826aac1445c6b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:45:59.807926    9057 start.go:364] duration metric: took 33.083µs to acquireMachinesLock for "no-preload-394000"
	I0610 03:45:59.808034    9057 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 03:45:59.808079    9057 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0610 03:45:59.807998    9057 start.go:93] Provisioning new machine with config: &{Name:no-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:45:59.808097    9057 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:45:59.808101    9057 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0610 03:45:59.815458    9057 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:45:59.808136    9057 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0610 03:45:59.818388    9057 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0610 03:45:59.818405    9057 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0610 03:45:59.818408    9057 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0610 03:45:59.818433    9057 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0610 03:45:59.818877    9057 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0610 03:45:59.820457    9057 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0610 03:45:59.820494    9057 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0610 03:45:59.832314    9057 start.go:159] libmachine.API.Create for "no-preload-394000" (driver="qemu2")
	I0610 03:45:59.832335    9057 client.go:168] LocalClient.Create starting
	I0610 03:45:59.832438    9057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:45:59.832470    9057 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:59.832483    9057 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:59.832523    9057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:45:59.832546    9057 main.go:141] libmachine: Decoding PEM data...
	I0610 03:45:59.832552    9057 main.go:141] libmachine: Parsing certificate...
	I0610 03:45:59.832908    9057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:45:59.983893    9057 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:00.066778    9057 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:00.066829    9057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:00.067110    9057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:00.080011    9057 main.go:141] libmachine: STDOUT: 
	I0610 03:46:00.080041    9057 main.go:141] libmachine: STDERR: 
	I0610 03:46:00.080116    9057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2 +20000M
	I0610 03:46:00.092282    9057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:00.092307    9057 main.go:141] libmachine: STDERR: 
	I0610 03:46:00.092325    9057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:00.092329    9057 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:00.092370    9057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:2b:65:d3:41:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:00.094369    9057 main.go:141] libmachine: STDOUT: 
	I0610 03:46:00.094395    9057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:00.094416    9057 client.go:171] duration metric: took 262.072416ms to LocalClient.Create
	I0610 03:46:00.700150    9057 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0610 03:46:00.732803    9057 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0610 03:46:00.750995    9057 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0610 03:46:00.751451    9057 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0610 03:46:00.881931    9057 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0610 03:46:00.898463    9057 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0610 03:46:00.930954    9057 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 03:46:00.930999    9057 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.123201667s
	I0610 03:46:00.931016    9057 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 03:46:00.939403    9057 cache.go:162] opening:  /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0610 03:46:02.094653    9057 start.go:128] duration metric: took 2.286499792s to createHost
	I0610 03:46:02.094710    9057 start.go:83] releasing machines lock for "no-preload-394000", held for 2.286689166s
	W0610 03:46:02.094815    9057 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:02.108661    9057 out.go:177] * Deleting "no-preload-394000" in qemu2 ...
	W0610 03:46:02.134239    9057 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:02.134265    9057 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:03.397609    9057 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0610 03:46:03.397633    9057 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.589772708s
	I0610 03:46:03.397644    9057 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0610 03:46:04.488454    9057 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0610 03:46:04.488469    9057 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 4.680639708s
	I0610 03:46:04.488477    9057 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0610 03:46:04.578226    9057 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0610 03:46:04.578238    9057 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 4.770558875s
	I0610 03:46:04.578245    9057 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0610 03:46:04.864849    9057 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0610 03:46:04.864873    9057 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 5.057187583s
	I0610 03:46:04.864883    9057 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0610 03:46:05.296102    9057 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0610 03:46:05.296136    9057 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 5.488284625s
	I0610 03:46:05.296152    9057 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0610 03:46:07.136091    9057 start.go:360] acquireMachinesLock for no-preload-394000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:07.136456    9057 start.go:364] duration metric: took 298.458µs to acquireMachinesLock for "no-preload-394000"
	I0610 03:46:07.136558    9057 start.go:93] Provisioning new machine with config: &{Name:no-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:46:07.136784    9057 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:46:07.147363    9057 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:46:07.196931    9057 start.go:159] libmachine.API.Create for "no-preload-394000" (driver="qemu2")
	I0610 03:46:07.197030    9057 client.go:168] LocalClient.Create starting
	I0610 03:46:07.197211    9057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:46:07.197284    9057 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:07.197301    9057 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:07.197384    9057 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:46:07.197429    9057 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:07.197448    9057 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:07.197981    9057 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:46:07.350639    9057 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:07.437851    9057 cache.go:157] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0610 03:46:07.437868    9057 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 7.630074292s
	I0610 03:46:07.437875    9057 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0610 03:46:07.437887    9057 cache.go:87] Successfully saved all images to host disk.
	I0610 03:46:07.484233    9057 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:07.484239    9057 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:07.484427    9057 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:07.497422    9057 main.go:141] libmachine: STDOUT: 
	I0610 03:46:07.497445    9057 main.go:141] libmachine: STDERR: 
	I0610 03:46:07.497514    9057 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2 +20000M
	I0610 03:46:07.508570    9057 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:07.508588    9057 main.go:141] libmachine: STDERR: 
	I0610 03:46:07.508599    9057 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:07.508606    9057 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:07.508662    9057 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:35:68:72:55:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:07.510444    9057 main.go:141] libmachine: STDOUT: 
	I0610 03:46:07.510460    9057 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:07.510476    9057 client.go:171] duration metric: took 313.426291ms to LocalClient.Create
	I0610 03:46:09.512632    9057 start.go:128] duration metric: took 2.375786542s to createHost
	I0610 03:46:09.512667    9057 start.go:83] releasing machines lock for "no-preload-394000", held for 2.376169292s
	W0610 03:46:09.512847    9057 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-394000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:09.525191    9057 out.go:177] 
	W0610 03:46:09.530068    9057 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:09.530079    9057 out.go:239] * 
	W0610 03:46:09.531037    9057 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:09.543148    9057 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (38.046709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.90s)
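
Every failure in this group traces back to the same driver error above: socket_vmnet_client cannot reach "/var/run/socket_vmnet" (Connection refused), so QEMU never receives the vmnet file descriptor and no VM is ever created. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (the service name and restart step are assumptions, not taken from this log):

    # is the socket_vmnet daemon running, and does its socket exist?
    sudo brew services list | grep socket_vmnet
    ls -l /var/run/socket_vmnet

    # restart the daemon (it must run as root to use vmnet), then recreate the profile
    sudo brew services restart socket_vmnet
    minikube delete -p no-preload-394000
    minikube start -p no-preload-394000 --driver=qemu2 --network=socket_vmnet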

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-394000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-394000 create -f testdata/busybox.yaml: exit status 1 (27.292542ms)

** stderr ** 
	error: context "no-preload-394000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-394000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (29.850666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (28.97425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
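
The error here, context "no-preload-394000" does not exist, is a downstream effect of the failed FirstStart: since the VM was never provisioned, minikube never wrote a kubeconfig entry for the profile, so every kubectl --context call in the group fails identically. A quick confirmation sketch using standard kubectl/minikube commands (nothing below is taken from the log itself):

    kubectl config get-contexts                              # no no-preload-394000 entry expected
    out/minikube-darwin-arm64 status -p no-preload-394000    # host reports "Stopped"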

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-394000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-394000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-394000 describe deploy/metrics-server -n kube-system: exit status 1 (27.744292ms)

** stderr ** 
	error: context "no-preload-394000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-394000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (29.033125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
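
This check never reaches the addon itself for the same reason: with no cluster, the metrics-server deployment is never rendered. On a healthy cluster the --images/--registries override could be verified directly; a sketch using standard kubectl jsonpath syntax (hypothetical here, since the context does not exist):

    kubectl --context no-preload-394000 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected output: fake.domain/registry.k8s.io/echoserver:1.4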

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.199512167s)

-- stdout --
	* [no-preload-394000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-394000" primary control-plane node in "no-preload-394000" cluster
	* Restarting existing qemu2 VM for "no-preload-394000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-394000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:46:13.032290    9137 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:13.032425    9137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:13.032428    9137 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:13.032431    9137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:13.032579    9137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:13.033603    9137 out.go:298] Setting JSON to false
	I0610 03:46:13.050392    9137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6344,"bootTime":1718010029,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:46:13.050466    9137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:46:13.055125    9137 out.go:177] * [no-preload-394000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:46:13.063061    9137 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:46:13.063114    9137 notify.go:220] Checking for updates...
	I0610 03:46:13.071045    9137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:46:13.078036    9137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:46:13.082019    9137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:46:13.085046    9137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:46:13.088046    9137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:46:13.091223    9137 config.go:182] Loaded profile config "no-preload-394000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:13.091459    9137 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:46:13.095044    9137 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:46:13.102063    9137 start.go:297] selected driver: qemu2
	I0610 03:46:13.102069    9137 start.go:901] validating driver "qemu2" against &{Name:no-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:13.102127    9137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:46:13.104153    9137 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:46:13.104187    9137 cni.go:84] Creating CNI manager for ""
	I0610 03:46:13.104194    9137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:46:13.104215    9137 start.go:340] cluster config:
	{Name:no-preload-394000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-394000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:13.108345    9137 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.117055    9137 out.go:177] * Starting "no-preload-394000" primary control-plane node in "no-preload-394000" cluster
	I0610 03:46:13.121079    9137 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:46:13.121166    9137 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/no-preload-394000/config.json ...
	I0610 03:46:13.121181    9137 cache.go:107] acquiring lock: {Name:mkbb998c36a2212d49da3a6e16d0729d21134180 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121196    9137 cache.go:107] acquiring lock: {Name:mke085d629c33c819d0db1acbfcf2a338c45baf1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121212    9137 cache.go:107] acquiring lock: {Name:mk6f2cd8a2f42c5fbb4f058f46df71e652fc5f23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121249    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 03:46:13.121255    9137 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.584µs
	I0610 03:46:13.121260    9137 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 03:46:13.121266    9137 cache.go:107] acquiring lock: {Name:mk26c01f691f66a90919812d6677798c79591196 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121281    9137 cache.go:107] acquiring lock: {Name:mkf10c144f368593349e834b89555ca89bf6c5e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121276    9137 cache.go:107] acquiring lock: {Name:mke31be456ecb4b6714076615288e7606ed38646 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121318    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0610 03:46:13.121322    9137 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 42µs
	I0610 03:46:13.121324    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0610 03:46:13.121270    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0610 03:46:13.121331    9137 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 54.875µs
	I0610 03:46:13.121326    9137 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0610 03:46:13.121334    9137 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0610 03:46:13.121331    9137 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 146.416µs
	I0610 03:46:13.121342    9137 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0610 03:46:13.121270    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0610 03:46:13.121347    9137 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 162.334µs
	I0610 03:46:13.121350    9137 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0610 03:46:13.121367    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0610 03:46:13.121378    9137 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 105.667µs
	I0610 03:46:13.121381    9137 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0610 03:46:13.121390    9137 cache.go:107] acquiring lock: {Name:mke5d9c091c9a109c0a47b19cbbae97b123b4eec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121408    9137 cache.go:107] acquiring lock: {Name:mkbab301c1e383be85cf5d2955826aac1445c6b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.121439    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0610 03:46:13.121443    9137 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 88.958µs
	I0610 03:46:13.121448    9137 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0610 03:46:13.121456    9137 cache.go:115] /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 03:46:13.121459    9137 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 104µs
	I0610 03:46:13.121464    9137 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 03:46:13.121468    9137 cache.go:87] Successfully saved all images to host disk.
	I0610 03:46:13.121615    9137 start.go:360] acquireMachinesLock for no-preload-394000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:13.121646    9137 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "no-preload-394000"
	I0610 03:46:13.121654    9137 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:13.121660    9137 fix.go:54] fixHost starting: 
	I0610 03:46:13.121768    9137 fix.go:112] recreateIfNeeded on no-preload-394000: state=Stopped err=<nil>
	W0610 03:46:13.121776    9137 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:13.129019    9137 out.go:177] * Restarting existing qemu2 VM for "no-preload-394000" ...
	I0610 03:46:13.133022    9137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:35:68:72:55:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:13.134926    9137 main.go:141] libmachine: STDOUT: 
	I0610 03:46:13.134945    9137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:13.134974    9137 fix.go:56] duration metric: took 13.312833ms for fixHost
	I0610 03:46:13.134978    9137 start.go:83] releasing machines lock for "no-preload-394000", held for 13.327584ms
	W0610 03:46:13.134984    9137 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:13.135012    9137 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:13.135016    9137 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:18.136129    9137 start.go:360] acquireMachinesLock for no-preload-394000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:18.136567    9137 start.go:364] duration metric: took 341.917µs to acquireMachinesLock for "no-preload-394000"
	I0610 03:46:18.136700    9137 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:18.136723    9137 fix.go:54] fixHost starting: 
	I0610 03:46:18.137482    9137 fix.go:112] recreateIfNeeded on no-preload-394000: state=Stopped err=<nil>
	W0610 03:46:18.137510    9137 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:18.153140    9137 out.go:177] * Restarting existing qemu2 VM for "no-preload-394000" ...
	I0610 03:46:18.158258    9137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:35:68:72:55:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/no-preload-394000/disk.qcow2
	I0610 03:46:18.167890    9137 main.go:141] libmachine: STDOUT: 
	I0610 03:46:18.167976    9137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:18.168078    9137 fix.go:56] duration metric: took 31.354791ms for fixHost
	I0610 03:46:18.168099    9137 start.go:83] releasing machines lock for "no-preload-394000", held for 31.507042ms
	W0610 03:46:18.168334    9137 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-394000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:18.176848    9137 out.go:177] 
	W0610 03:46:18.180093    9137 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:18.180119    9137 out.go:239] * 
	W0610 03:46:18.182545    9137 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:18.189997    9137 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-394000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (70.821334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
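
SecondStart takes the fixHost path ("Skipping create...Using existing machine configuration") and re-executes the same QEMU command line through socket_vmnet_client, so it inherits the identical socket failure; the two retries 5 seconds apart cannot succeed until the daemon is reachable. For reference, the essential shape of that invocation, trimmed from the full command in the log above (fd 3 is the vmnet socket that socket_vmnet_client hands to QEMU; the disk path is shortened here):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
      -m 2200 -smp 2 -display none \
      -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
      -daemonize disk.qcow2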

TestStartStop/group/embed-certs/serial/FirstStart (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-543000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-543000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (10.257920167s)

-- stdout --
	* [embed-certs-543000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-543000" primary control-plane node in "embed-certs-543000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-543000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:46:13.816570    9148 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:13.816692    9148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:13.816695    9148 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:13.816697    9148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:13.816822    9148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:13.817885    9148 out.go:298] Setting JSON to false
	I0610 03:46:13.834339    9148 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6344,"bootTime":1718010029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:46:13.834395    9148 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:46:13.838343    9148 out.go:177] * [embed-certs-543000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:46:13.845365    9148 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:46:13.848276    9148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:46:13.845431    9148 notify.go:220] Checking for updates...
	I0610 03:46:13.855367    9148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:46:13.858279    9148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:46:13.861325    9148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:46:13.864353    9148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:46:13.867603    9148 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:13.867679    9148 config.go:182] Loaded profile config "no-preload-394000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:13.867724    9148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:46:13.872322    9148 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:46:13.879341    9148 start.go:297] selected driver: qemu2
	I0610 03:46:13.879348    9148 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:46:13.879354    9148 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:46:13.881632    9148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:46:13.886328    9148 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:46:13.889461    9148 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:46:13.889477    9148 cni.go:84] Creating CNI manager for ""
	I0610 03:46:13.889484    9148 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:46:13.889489    9148 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:46:13.889517    9148 start.go:340] cluster config:
	{Name:embed-certs-543000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:13.894247    9148 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:13.901336    9148 out.go:177] * Starting "embed-certs-543000" primary control-plane node in "embed-certs-543000" cluster
	I0610 03:46:13.905164    9148 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:46:13.905180    9148 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:46:13.905191    9148 cache.go:56] Caching tarball of preloaded images
	I0610 03:46:13.905272    9148 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:46:13.905285    9148 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:46:13.905358    9148 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/embed-certs-543000/config.json ...
	I0610 03:46:13.905369    9148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/embed-certs-543000/config.json: {Name:mk4960e3242966c2c93e1c7cafcc65eb05abce0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:46:13.905596    9148 start.go:360] acquireMachinesLock for embed-certs-543000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:13.905632    9148 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "embed-certs-543000"
	I0610 03:46:13.905643    9148 start.go:93] Provisioning new machine with config: &{Name:embed-certs-543000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:46:13.905671    9148 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:46:13.913295    9148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:46:13.930945    9148 start.go:159] libmachine.API.Create for "embed-certs-543000" (driver="qemu2")
	I0610 03:46:13.930974    9148 client.go:168] LocalClient.Create starting
	I0610 03:46:13.931042    9148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:46:13.931077    9148 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:13.931090    9148 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:13.931137    9148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:46:13.931164    9148 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:13.931178    9148 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:13.931673    9148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:46:14.074481    9148 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:14.156312    9148 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:14.156319    9148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:14.156491    9148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:14.169500    9148 main.go:141] libmachine: STDOUT: 
	I0610 03:46:14.169524    9148 main.go:141] libmachine: STDERR: 
	I0610 03:46:14.169593    9148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2 +20000M
	I0610 03:46:14.180768    9148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:14.180790    9148 main.go:141] libmachine: STDERR: 
	I0610 03:46:14.180806    9148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:14.180811    9148 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:14.180844    9148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:e8:61:8e:85:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:14.182534    9148 main.go:141] libmachine: STDOUT: 
	I0610 03:46:14.182549    9148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:14.182566    9148 client.go:171] duration metric: took 251.5835ms to LocalClient.Create
	I0610 03:46:16.184840    9148 start.go:128] duration metric: took 2.279123625s to createHost
	I0610 03:46:16.184895    9148 start.go:83] releasing machines lock for "embed-certs-543000", held for 2.279228959s
	W0610 03:46:16.184997    9148 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:16.198268    9148 out.go:177] * Deleting "embed-certs-543000" in qemu2 ...
	W0610 03:46:16.224691    9148 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:16.224719    9148 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:21.227000    9148 start.go:360] acquireMachinesLock for embed-certs-543000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:21.650635    9148 start.go:364] duration metric: took 423.458583ms to acquireMachinesLock for "embed-certs-543000"
	I0610 03:46:21.650777    9148 start.go:93] Provisioning new machine with config: &{Name:embed-certs-543000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:46:21.651014    9148 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:46:21.666620    9148 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:46:21.715147    9148 start.go:159] libmachine.API.Create for "embed-certs-543000" (driver="qemu2")
	I0610 03:46:21.715214    9148 client.go:168] LocalClient.Create starting
	I0610 03:46:21.715335    9148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:46:21.715401    9148 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:21.715420    9148 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:21.715482    9148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:46:21.715529    9148 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:21.715543    9148 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:21.716052    9148 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:46:21.871081    9148 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:21.961721    9148 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:21.961726    9148 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:21.961902    9148 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:21.974655    9148 main.go:141] libmachine: STDOUT: 
	I0610 03:46:21.974679    9148 main.go:141] libmachine: STDERR: 
	I0610 03:46:21.974739    9148 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2 +20000M
	I0610 03:46:21.985818    9148 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:21.985833    9148 main.go:141] libmachine: STDERR: 
	I0610 03:46:21.985843    9148 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:21.985848    9148 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:21.985882    9148 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:29:5d:34:f0:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:21.987579    9148 main.go:141] libmachine: STDOUT: 
	I0610 03:46:21.987597    9148 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:21.987610    9148 client.go:171] duration metric: took 272.386833ms to LocalClient.Create
	I0610 03:46:23.989859    9148 start.go:128] duration metric: took 2.338770084s to createHost
	I0610 03:46:23.989902    9148 start.go:83] releasing machines lock for "embed-certs-543000", held for 2.339207166s
	W0610 03:46:23.990231    9148 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-543000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-543000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:24.008787    9148 out.go:177] 
	W0610 03:46:24.017815    9148 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:24.017842    9148 out.go:239] * 
	* 
	W0610 03:46:24.020046    9148 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:24.030760    9148 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-543000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (65.602375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.33s)
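
Every start attempt in this group dies at the same point: the qemu2 driver shells out through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet unix socket. A minimal pre-flight check, assuming the SocketVMnetPath of /var/run/socket_vmnet shown in the config above (this helper is illustrative, not part of minikube or the test suite):

	// Hypothetical pre-flight check, not part of minikube: dial the
	// socket_vmnet unix socket the qemu2 driver depends on. A refused
	// connection reproduces the failure captured in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails with "connection refused", the socket file exists but no daemon is listening behind it; if it fails with "no such file or directory", socket_vmnet was never started on the agent.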

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-394000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (32.275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
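
The failure mode changes here: the cluster never came up, so its context was never written to the kubeconfig, and every call against it fails before reaching an API server. A sketch of the context lookup that produces this error, using client-go's standard kubeconfig loading (illustrative, not the test's actual code):

	// Illustrative only: resolve a context name the way a kubectl-style
	// client would, and fail fast when it is missing from the kubeconfig.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Println("loading kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["no-preload-394000"]; !ok {
			fmt.Println(`context "no-preload-394000" does not exist`)
		}
	}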

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-394000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-394000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-394000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.616ms)

** stderr ** 
	error: context "no-preload-394000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-394000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (28.299333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-394000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (28.481ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
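
The "-want +got" block above is a set comparison: all eight expected v1.30.1 images are reported missing because "image list" had no running host to query, so the got side is empty. A reduced sketch of that assertion, with reflect.DeepEqual standing in for the structured diff library the test actually prints with:

	// Sketch of the image-list assertion: sort both slices, then compare.
	package main

	import (
		"fmt"
		"reflect"
		"sort"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/kube-controller-manager:v1.30.1",
			"registry.k8s.io/kube-proxy:v1.30.1",
			"registry.k8s.io/kube-scheduler:v1.30.1",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // empty: the host never started, so nothing was listed
		sort.Strings(want)
		sort.Strings(got)
		if !reflect.DeepEqual(want, got) {
			fmt.Printf("v1.30.1 images missing (-want +got):\n-%v\n+%v\n", want, got)
		}
	}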

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-394000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-394000 --alsologtostderr -v=1: exit status 83 (40.956125ms)

-- stdout --
	* The control-plane node no-preload-394000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-394000"

-- /stdout --
** stderr ** 
	I0610 03:46:18.459234    9172 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:18.459392    9172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:18.459395    9172 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:18.459398    9172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:18.459538    9172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:18.459738    9172 out.go:298] Setting JSON to false
	I0610 03:46:18.459747    9172 mustload.go:65] Loading cluster: no-preload-394000
	I0610 03:46:18.459942    9172 config.go:182] Loaded profile config "no-preload-394000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:18.464035    9172 out.go:177] * The control-plane node no-preload-394000 host is not running: state=Stopped
	I0610 03:46:18.467845    9172 out.go:177]   To start a cluster, run: "minikube start -p no-preload-394000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-394000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (29.621125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (28.618666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-394000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
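
Each post-mortem above probes the host with "status --format={{.Host}}"; the argument is a Go text/template rendered over the status result, which is why the raw stdout is the single word "Stopped". A self-contained sketch of that rendering (the struct and its field names here are assumptions for illustration, not minikube's actual types):

	// Assumed status struct, for illustration: render the same {{.Host}}
	// template that the post-mortem passes to "status --format=".
	package main

	import (
		"os"
		"text/template"
	)

	type clusterStatus struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		tmpl.Execute(os.Stdout, clusterStatus{Host: "Stopped"}) // prints: Stopped
	}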

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-310000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-310000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.926250125s)

-- stdout --
	* [default-k8s-diff-port-310000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-310000" primary control-plane node in "default-k8s-diff-port-310000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-310000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:46:19.145879    9207 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:19.146019    9207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:19.146023    9207 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:19.146025    9207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:19.146137    9207 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:19.147248    9207 out.go:298] Setting JSON to false
	I0610 03:46:19.163541    9207 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6350,"bootTime":1718010029,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:46:19.163610    9207 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:46:19.177257    9207 out.go:177] * [default-k8s-diff-port-310000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:46:19.182296    9207 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:46:19.183908    9207 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:46:19.182341    9207 notify.go:220] Checking for updates...
	I0610 03:46:19.187231    9207 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:46:19.190266    9207 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:46:19.193477    9207 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:46:19.196250    9207 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:46:19.199619    9207 config.go:182] Loaded profile config "embed-certs-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:19.199679    9207 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:19.199726    9207 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:46:19.204258    9207 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:46:19.211261    9207 start.go:297] selected driver: qemu2
	I0610 03:46:19.211268    9207 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:46:19.211275    9207 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:46:19.213598    9207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:46:19.217260    9207 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:46:19.220287    9207 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:46:19.220331    9207 cni.go:84] Creating CNI manager for ""
	I0610 03:46:19.220338    9207 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:46:19.220342    9207 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:46:19.220381    9207 start.go:340] cluster config:
	{Name:default-k8s-diff-port-310000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:19.224871    9207 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:19.232074    9207 out.go:177] * Starting "default-k8s-diff-port-310000" primary control-plane node in "default-k8s-diff-port-310000" cluster
	I0610 03:46:19.236234    9207 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:46:19.236247    9207 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:46:19.236255    9207 cache.go:56] Caching tarball of preloaded images
	I0610 03:46:19.236308    9207 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:46:19.236314    9207 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:46:19.236379    9207 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/default-k8s-diff-port-310000/config.json ...
	I0610 03:46:19.236389    9207 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/default-k8s-diff-port-310000/config.json: {Name:mk9c9bc9f81fe24abed1b6f946c15d134e5e3528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:46:19.236612    9207 start.go:360] acquireMachinesLock for default-k8s-diff-port-310000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:19.236650    9207 start.go:364] duration metric: took 29.541µs to acquireMachinesLock for "default-k8s-diff-port-310000"
	I0610 03:46:19.236662    9207 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:46:19.236697    9207 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:46:19.243229    9207 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:46:19.261338    9207 start.go:159] libmachine.API.Create for "default-k8s-diff-port-310000" (driver="qemu2")
	I0610 03:46:19.261369    9207 client.go:168] LocalClient.Create starting
	I0610 03:46:19.261437    9207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:46:19.261469    9207 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:19.261482    9207 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:19.261525    9207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:46:19.261549    9207 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:19.261555    9207 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:19.262013    9207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:46:19.404504    9207 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:19.622253    9207 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:19.622260    9207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:19.622468    9207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:19.635507    9207 main.go:141] libmachine: STDOUT: 
	I0610 03:46:19.635522    9207 main.go:141] libmachine: STDERR: 
	I0610 03:46:19.635590    9207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2 +20000M
	I0610 03:46:19.646404    9207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:19.646419    9207 main.go:141] libmachine: STDERR: 
	I0610 03:46:19.646441    9207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:19.646447    9207 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:19.646473    9207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:3e:6a:b0:a5:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:19.648174    9207 main.go:141] libmachine: STDOUT: 
	I0610 03:46:19.648192    9207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:19.648218    9207 client.go:171] duration metric: took 386.838ms to LocalClient.Create
	I0610 03:46:21.650426    9207 start.go:128] duration metric: took 2.413683s to createHost
	I0610 03:46:21.650481    9207 start.go:83] releasing machines lock for "default-k8s-diff-port-310000", held for 2.413794291s
	W0610 03:46:21.650582    9207 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:21.678694    9207 out.go:177] * Deleting "default-k8s-diff-port-310000" in qemu2 ...
	W0610 03:46:21.699367    9207 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:21.699388    9207 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:26.701668    9207 start.go:360] acquireMachinesLock for default-k8s-diff-port-310000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:26.702179    9207 start.go:364] duration metric: took 400.041µs to acquireMachinesLock for "default-k8s-diff-port-310000"
	I0610 03:46:26.702251    9207 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:46:26.702543    9207 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:46:26.710211    9207 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:46:26.759504    9207 start.go:159] libmachine.API.Create for "default-k8s-diff-port-310000" (driver="qemu2")
	I0610 03:46:26.759569    9207 client.go:168] LocalClient.Create starting
	I0610 03:46:26.759676    9207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:46:26.759723    9207 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:26.759742    9207 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:26.759810    9207 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:46:26.759846    9207 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:26.759860    9207 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:26.760505    9207 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:46:26.916051    9207 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:26.970181    9207 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:26.970186    9207 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:26.970372    9207 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:26.983004    9207 main.go:141] libmachine: STDOUT: 
	I0610 03:46:26.983024    9207 main.go:141] libmachine: STDERR: 
	I0610 03:46:26.983087    9207 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2 +20000M
	I0610 03:46:26.993936    9207 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:26.993971    9207 main.go:141] libmachine: STDERR: 
	I0610 03:46:26.993985    9207 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:26.993990    9207 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:26.994029    9207 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:85:45:a2:7e:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:26.995812    9207 main.go:141] libmachine: STDOUT: 
	I0610 03:46:26.995830    9207 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:26.995844    9207 client.go:171] duration metric: took 236.267958ms to LocalClient.Create
	I0610 03:46:28.998134    9207 start.go:128] duration metric: took 2.295521708s to createHost
	I0610 03:46:28.998207    9207 start.go:83] releasing machines lock for "default-k8s-diff-port-310000", held for 2.295977584s
	W0610 03:46:28.998539    9207 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-310000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-310000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:29.008189    9207 out.go:177] 
	W0610 03:46:29.020227    9207 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:29.020274    9207 out.go:239] * 
	* 
	W0610 03:46:29.022800    9207 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:29.031144    9207 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-310000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (64.240791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.99s)
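
Both FirstStart failures in this report follow the same control flow, visible in the timestamps: create the host, hit the socket error, delete the half-created profile, wait five seconds, retry once, then exit 80 with GUEST_PROVISION. A stripped-down sketch of that retry loop (names and structure are assumptions; the real logic lives in minikube's start.go):

	// Hypothetical condensation of the start flow in the logs above:
	// one fixed-delay retry around host creation before giving up.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stand-in for the qemu2 create path, which fails while the
		// socket_vmnet daemon is unreachable.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}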

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-543000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-543000 create -f testdata/busybox.yaml: exit status 1 (30.374542ms)

** stderr ** 
	error: context "embed-certs-543000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-543000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (29.269917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (29.114458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-543000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-543000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-543000 describe deploy/metrics-server -n kube-system: exit status 1 (26.901208ms)

** stderr ** 
	error: context "embed-certs-543000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-543000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (28.711417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
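
As with the earlier no-preload checks, the assertion here never reaches a live deployment: "kubectl describe" fails on the missing context, so the expected-image check runs against an empty string. A minimal sketch of that final substring assertion (illustrative; the deployInfo value is a stand-in):

	// Illustrative substring assertion: the metrics-server deployment
	// description must mention the image overridden via --images/--registries.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		deployInfo := "" // empty: describe failed, so there is nothing to inspect
		want := " fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(deployInfo, want) {
			fmt.Printf("addon did not load correct image. Expected to contain %q. Addon deployment info: %s\n", want, deployInfo)
		}
	}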

TestStartStop/group/embed-certs/serial/SecondStart (6.3s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-543000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-543000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (6.227469958s)

-- stdout --
	* [embed-certs-543000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-543000" primary control-plane node in "embed-certs-543000" cluster
	* Restarting existing qemu2 VM for "embed-certs-543000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-543000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:46:27.899927    9263 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:27.900055    9263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:27.900058    9263 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:27.900061    9263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:27.900194    9263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:27.901178    9263 out.go:298] Setting JSON to false
	I0610 03:46:27.917241    9263 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6358,"bootTime":1718010029,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:46:27.917309    9263 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:46:27.922469    9263 out.go:177] * [embed-certs-543000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:46:27.929448    9263 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:46:27.933292    9263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:46:27.929525    9263 notify.go:220] Checking for updates...
	I0610 03:46:27.940353    9263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:46:27.941896    9263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:46:27.945387    9263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:46:27.948378    9263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:46:27.951746    9263 config.go:182] Loaded profile config "embed-certs-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:27.952004    9263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:46:27.956377    9263 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:46:27.963411    9263 start.go:297] selected driver: qemu2
	I0610 03:46:27.963418    9263 start.go:901] validating driver "qemu2" against &{Name:embed-certs-543000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:27.963480    9263 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:46:27.965638    9263 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:46:27.965674    9263 cni.go:84] Creating CNI manager for ""
	I0610 03:46:27.965682    9263 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:46:27.965710    9263 start.go:340] cluster config:
	{Name:embed-certs-543000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-543000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:27.970051    9263 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:27.977468    9263 out.go:177] * Starting "embed-certs-543000" primary control-plane node in "embed-certs-543000" cluster
	I0610 03:46:27.981398    9263 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:46:27.981414    9263 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:46:27.981428    9263 cache.go:56] Caching tarball of preloaded images
	I0610 03:46:27.981498    9263 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:46:27.981503    9263 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:46:27.981565    9263 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/embed-certs-543000/config.json ...
	I0610 03:46:27.982087    9263 start.go:360] acquireMachinesLock for embed-certs-543000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:28.998353    9263 start.go:364] duration metric: took 1.016225542s to acquireMachinesLock for "embed-certs-543000"
	I0610 03:46:28.998548    9263 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:28.998592    9263 fix.go:54] fixHost starting: 
	I0610 03:46:28.999338    9263 fix.go:112] recreateIfNeeded on embed-certs-543000: state=Stopped err=<nil>
	W0610 03:46:28.999379    9263 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:29.017210    9263 out.go:177] * Restarting existing qemu2 VM for "embed-certs-543000" ...
	I0610 03:46:29.024250    9263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:29:5d:34:f0:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:29.034703    9263 main.go:141] libmachine: STDOUT: 
	I0610 03:46:29.034796    9263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:29.035002    9263 fix.go:56] duration metric: took 36.390584ms for fixHost
	I0610 03:46:29.035040    9263 start.go:83] releasing machines lock for "embed-certs-543000", held for 36.646958ms
	W0610 03:46:29.035109    9263 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:29.035289    9263 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:29.035305    9263 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:34.037609    9263 start.go:360] acquireMachinesLock for embed-certs-543000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:34.038065    9263 start.go:364] duration metric: took 343.416µs to acquireMachinesLock for "embed-certs-543000"
	I0610 03:46:34.038218    9263 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:34.038238    9263 fix.go:54] fixHost starting: 
	I0610 03:46:34.038998    9263 fix.go:112] recreateIfNeeded on embed-certs-543000: state=Stopped err=<nil>
	W0610 03:46:34.039027    9263 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:34.049631    9263 out.go:177] * Restarting existing qemu2 VM for "embed-certs-543000" ...
	I0610 03:46:34.054951    9263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:29:5d:34:f0:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/embed-certs-543000/disk.qcow2
	I0610 03:46:34.064523    9263 main.go:141] libmachine: STDOUT: 
	I0610 03:46:34.064605    9263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:34.064740    9263 fix.go:56] duration metric: took 26.502167ms for fixHost
	I0610 03:46:34.064764    9263 start.go:83] releasing machines lock for "embed-certs-543000", held for 26.676542ms
	W0610 03:46:34.065012    9263 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-543000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:34.071743    9263 out.go:177] 
	W0610 03:46:34.074864    9263 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:34.074886    9263 out.go:239] * 
	* 
	W0610 03:46:34.077699    9263 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:34.085683    9263 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-543000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (73.088791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.30s)
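
Diagnostic note: every qemu2 start in this trace dies at the same step. The driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first reach the socket_vmnet daemon's UNIX socket at /var/run/socket_vmnet; the daemon is not listening, the connection is refused, QEMU never starts, and the retry five seconds later fails identically, leaving the machine Stopped. A minimal Go sketch of the same reachability check (hypothetical diagnostic, not part of the test suite; the socket path is the one shown in the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the UNIX socket the qemu2 driver depends on. "connection refused"
	// here corresponds to the driver failure recorded in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

Because the daemon stays down for the whole run, the follow-on tests in this group never get a running host and fail on the resulting Stopped state rather than on anything they exercise themselves.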

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-310000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-310000 create -f testdata/busybox.yaml: exit status 1 (29.994542ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-310000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-310000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (29.348042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (29.087625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
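
Note on this failure shape: the create never reaches a cluster. kubectl exits 1 because no "default-k8s-diff-port-310000" context was ever written to the kubeconfig, a downstream effect of the failed VM start rather than a kubectl or manifest problem. A short Go sketch of the same precondition check (hypothetical helper; assumes kubectl's `config get-contexts -o name` output of one context name per line):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// contextExists reports whether the current kubeconfig defines the named
// context, mirroring the `context ... does not exist` error printed above.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := contextExists("default-k8s-diff-port-310000")
	fmt.Println("context exists:", ok, "err:", err)
}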

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-310000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-310000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-310000 describe deploy/metrics-server -n kube-system: exit status 1 (26.476917ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-310000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-310000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (29.465667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-310000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-310000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.7971505s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-310000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-310000" primary control-plane node in "default-k8s-diff-port-310000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-310000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-310000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 03:46:31.452868    9298 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:31.453008    9298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:31.453011    9298 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:31.453014    9298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:31.453149    9298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:31.454129    9298 out.go:298] Setting JSON to false
	I0610 03:46:31.470405    9298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6362,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:46:31.470469    9298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:46:31.475321    9298 out.go:177] * [default-k8s-diff-port-310000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:46:31.482190    9298 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:46:31.482243    9298 notify.go:220] Checking for updates...
	I0610 03:46:31.490263    9298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:46:31.493307    9298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:46:31.496321    9298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:46:31.499256    9298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:46:31.502200    9298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:46:31.505565    9298 config.go:182] Loaded profile config "default-k8s-diff-port-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:31.505835    9298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:46:31.510287    9298 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:46:31.517247    9298 start.go:297] selected driver: qemu2
	I0610 03:46:31.517255    9298 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:31.517327    9298 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:46:31.519693    9298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 03:46:31.519733    9298 cni.go:84] Creating CNI manager for ""
	I0610 03:46:31.519740    9298 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:46:31.519771    9298 start.go:340] cluster config:
	{Name:default-k8s-diff-port-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:31.524114    9298 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:31.529261    9298 out.go:177] * Starting "default-k8s-diff-port-310000" primary control-plane node in "default-k8s-diff-port-310000" cluster
	I0610 03:46:31.533257    9298 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:46:31.533275    9298 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:46:31.533288    9298 cache.go:56] Caching tarball of preloaded images
	I0610 03:46:31.533340    9298 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:46:31.533345    9298 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:46:31.533409    9298 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/default-k8s-diff-port-310000/config.json ...
	I0610 03:46:31.533920    9298 start.go:360] acquireMachinesLock for default-k8s-diff-port-310000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:31.533952    9298 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "default-k8s-diff-port-310000"
	I0610 03:46:31.533961    9298 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:31.533966    9298 fix.go:54] fixHost starting: 
	I0610 03:46:31.534089    9298 fix.go:112] recreateIfNeeded on default-k8s-diff-port-310000: state=Stopped err=<nil>
	W0610 03:46:31.534100    9298 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:31.537258    9298 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-310000" ...
	I0610 03:46:31.544222    9298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:85:45:a2:7e:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:31.546114    9298 main.go:141] libmachine: STDOUT: 
	I0610 03:46:31.546132    9298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:31.546169    9298 fix.go:56] duration metric: took 12.200791ms for fixHost
	I0610 03:46:31.546173    9298 start.go:83] releasing machines lock for "default-k8s-diff-port-310000", held for 12.215708ms
	W0610 03:46:31.546179    9298 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:31.546208    9298 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:31.546213    9298 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:36.548464    9298 start.go:360] acquireMachinesLock for default-k8s-diff-port-310000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:37.144948    9298 start.go:364] duration metric: took 596.379458ms to acquireMachinesLock for "default-k8s-diff-port-310000"
	I0610 03:46:37.145085    9298 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:37.145116    9298 fix.go:54] fixHost starting: 
	I0610 03:46:37.145876    9298 fix.go:112] recreateIfNeeded on default-k8s-diff-port-310000: state=Stopped err=<nil>
	W0610 03:46:37.145903    9298 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:37.154347    9298 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-310000" ...
	I0610 03:46:37.169683    9298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:85:45:a2:7e:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/default-k8s-diff-port-310000/disk.qcow2
	I0610 03:46:37.179973    9298 main.go:141] libmachine: STDOUT: 
	I0610 03:46:37.180074    9298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:37.180164    9298 fix.go:56] duration metric: took 35.049583ms for fixHost
	I0610 03:46:37.180182    9298 start.go:83] releasing machines lock for "default-k8s-diff-port-310000", held for 35.201041ms
	W0610 03:46:37.180385    9298 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-310000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-310000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:37.189419    9298 out.go:177] 
	W0610 03:46:37.194578    9298 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:37.194600    9298 out.go:239] * 
	* 
	W0610 03:46:37.196394    9298 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:37.208567    9298 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-310000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (58.864208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-543000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (34.262375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-543000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-543000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-543000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.987125ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-543000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-543000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (28.56625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-543000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (29.192375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
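
Note on the (-want +got) block above: it is a structured diff of two []string values, with "-" marking entries present in the expected list but absent from the actual one; every expected image is missing because `image list` has nothing to report for a VM that never booted. The shape matches go-cmp's cmp.Diff output; a reduced sketch (illustrative only; assumes the github.com/google/go-cmp module):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // image list is empty when the VM never started
	if diff := cmp.Diff(want, got); diff != "" {
		// "-" lines are entries missing from got, the same shape as the
		// "v1.30.1 images missing (-want +got)" block above.
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}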

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-543000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-543000 --alsologtostderr -v=1: exit status 83 (40.202375ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-543000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-543000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 03:46:34.361238    9317 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:34.361398    9317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:34.361402    9317 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:34.361404    9317 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:34.361535    9317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:34.361758    9317 out.go:298] Setting JSON to false
	I0610 03:46:34.361767    9317 mustload.go:65] Loading cluster: embed-certs-543000
	I0610 03:46:34.361957    9317 config.go:182] Loaded profile config "embed-certs-543000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:34.366354    9317 out.go:177] * The control-plane node embed-certs-543000 host is not running: state=Stopped
	I0610 03:46:34.370286    9317 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-543000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-543000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (28.791292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (29.33ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-543000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
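
Note on the Pause result: unlike the starts above, which exit 80 (GUEST_PROVISION), pause against a stopped profile prints advice and exits 83, and the harness distinguishes the two outcomes purely by process exit code. A minimal Go sketch of how a caller recovers that code (hypothetical snippet; the binary path and profile name are the ones used above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "embed-certs-543000")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// For the run above this prints 83, which the harness records as
		// "Non-zero exit ... exit status 83".
		fmt.Println("exit code:", ee.ExitCode())
	}
}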

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-224000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-224000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (9.782316666s)

                                                
                                                
-- stdout --
	* [newest-cni-224000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-224000" primary control-plane node in "newest-cni-224000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-224000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 03:46:34.823501    9340 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:34.823636    9340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:34.823639    9340 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:34.823641    9340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:34.823799    9340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:34.824896    9340 out.go:298] Setting JSON to false
	I0610 03:46:34.842051    9340 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6365,"bootTime":1718010029,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:46:34.842112    9340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:46:34.847137    9340 out.go:177] * [newest-cni-224000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:46:34.854106    9340 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:46:34.854166    9340 notify.go:220] Checking for updates...
	I0610 03:46:34.857896    9340 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:46:34.860995    9340 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:46:34.864050    9340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:46:34.867052    9340 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:46:34.870052    9340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:46:34.873385    9340 config.go:182] Loaded profile config "default-k8s-diff-port-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:34.873449    9340 config.go:182] Loaded profile config "multinode-763000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:34.873499    9340 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:46:34.878020    9340 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 03:46:34.885021    9340 start.go:297] selected driver: qemu2
	I0610 03:46:34.885026    9340 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:46:34.885032    9340 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:46:34.887360    9340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0610 03:46:34.887383    9340 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0610 03:46:34.898972    9340 out.go:177] * Automatically selected the socket_vmnet network
	I0610 03:46:34.902155    9340 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 03:46:34.902205    9340 cni.go:84] Creating CNI manager for ""
	I0610 03:46:34.902213    9340 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:46:34.902225    9340 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:46:34.902259    9340 start.go:340] cluster config:
	{Name:newest-cni-224000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:34.907213    9340 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:34.915005    9340 out.go:177] * Starting "newest-cni-224000" primary control-plane node in "newest-cni-224000" cluster
	I0610 03:46:34.919050    9340 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:46:34.919066    9340 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:46:34.919083    9340 cache.go:56] Caching tarball of preloaded images
	I0610 03:46:34.919144    9340 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:46:34.919151    9340 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:46:34.919218    9340 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/newest-cni-224000/config.json ...
	I0610 03:46:34.919230    9340 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/newest-cni-224000/config.json: {Name:mk3f1f42e172786f7c0775a479b4e2c7f4665808 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:46:34.919700    9340 start.go:360] acquireMachinesLock for newest-cni-224000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:34.919736    9340 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "newest-cni-224000"
	I0610 03:46:34.919749    9340 start.go:93] Provisioning new machine with config: &{Name:newest-cni-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:46:34.919791    9340 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:46:34.928077    9340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:46:34.945873    9340 start.go:159] libmachine.API.Create for "newest-cni-224000" (driver="qemu2")
	I0610 03:46:34.945901    9340 client.go:168] LocalClient.Create starting
	I0610 03:46:34.945964    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:46:34.945995    9340 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:34.946008    9340 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:34.946050    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:46:34.946073    9340 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:34.946081    9340 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:34.946639    9340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:46:35.092203    9340 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:35.116977    9340 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:35.116982    9340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:35.117157    9340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:35.129859    9340 main.go:141] libmachine: STDOUT: 
	I0610 03:46:35.129883    9340 main.go:141] libmachine: STDERR: 
	I0610 03:46:35.129941    9340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2 +20000M
	I0610 03:46:35.140751    9340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:35.140773    9340 main.go:141] libmachine: STDERR: 
	I0610 03:46:35.140795    9340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:35.140808    9340 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:35.140842    9340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:25:de:4b:b1:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:35.142499    9340 main.go:141] libmachine: STDOUT: 
	I0610 03:46:35.142513    9340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:35.142536    9340 client.go:171] duration metric: took 196.627542ms to LocalClient.Create
	I0610 03:46:37.144760    9340 start.go:128] duration metric: took 2.224924708s to createHost
	I0610 03:46:37.144810    9340 start.go:83] releasing machines lock for "newest-cni-224000", held for 2.225040375s
	W0610 03:46:37.144871    9340 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:37.165615    9340 out.go:177] * Deleting "newest-cni-224000" in qemu2 ...
	W0610 03:46:37.218799    9340 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:37.218831    9340 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:42.220892    9340 start.go:360] acquireMachinesLock for newest-cni-224000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:42.221350    9340 start.go:364] duration metric: took 343.291µs to acquireMachinesLock for "newest-cni-224000"
	I0610 03:46:42.221464    9340 start.go:93] Provisioning new machine with config: &{Name:newest-cni-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 03:46:42.221800    9340 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 03:46:42.231418    9340 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 03:46:42.280756    9340 start.go:159] libmachine.API.Create for "newest-cni-224000" (driver="qemu2")
	I0610 03:46:42.280813    9340 client.go:168] LocalClient.Create starting
	I0610 03:46:42.280926    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/ca.pem
	I0610 03:46:42.280996    9340 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:42.281017    9340 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:42.281078    9340 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19046-4812/.minikube/certs/cert.pem
	I0610 03:46:42.281135    9340 main.go:141] libmachine: Decoding PEM data...
	I0610 03:46:42.281152    9340 main.go:141] libmachine: Parsing certificate...
	I0610 03:46:42.281685    9340 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso...
	I0610 03:46:42.434693    9340 main.go:141] libmachine: Creating SSH key...
	I0610 03:46:42.511404    9340 main.go:141] libmachine: Creating Disk image...
	I0610 03:46:42.511418    9340 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 03:46:42.511592    9340 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2.raw /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:42.524201    9340 main.go:141] libmachine: STDOUT: 
	I0610 03:46:42.524220    9340 main.go:141] libmachine: STDERR: 
	I0610 03:46:42.524266    9340 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2 +20000M
	I0610 03:46:42.535401    9340 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 03:46:42.535417    9340 main.go:141] libmachine: STDERR: 
	I0610 03:46:42.535433    9340 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:42.535438    9340 main.go:141] libmachine: Starting QEMU VM...
	I0610 03:46:42.535472    9340 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c3:37:64:8c:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:42.537156    9340 main.go:141] libmachine: STDOUT: 
	I0610 03:46:42.537171    9340 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:42.537183    9340 client.go:171] duration metric: took 256.361459ms to LocalClient.Create
	I0610 03:46:44.539466    9340 start.go:128] duration metric: took 2.317561583s to createHost
	I0610 03:46:44.539523    9340 start.go:83] releasing machines lock for "newest-cni-224000", held for 2.318125833s
	W0610 03:46:44.539867    9340 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-224000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:44.549147    9340 out.go:177] 
	W0610 03:46:44.555422    9340 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:44.555449    9340 out.go:239] * 
	W0610 03:46:44.557972    9340 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:44.569368    9340 out.go:177] 

** /stderr **
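Up to the launch step, provisioning actually succeeds: libmachine converts a raw seed image to qcow2 and then grows it by 20000 MB, which is exactly the pair of qemu-img invocations logged above. A rough Go sketch of those two steps (the directory here is an illustrative stand-in, not the real .minikube machine path):

    // make_disk.go — sketch of the qemu-img convert + resize pair seen in
    // the libmachine log; requires qemu-img on PATH.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        dir := "/tmp/newest-cni-224000" // stand-in for the machine directory
        steps := [][]string{
            {"qemu-img", "convert", "-f", "raw", "-O", "qcow2",
                dir + "/disk.qcow2.raw", dir + "/disk.qcow2"},
            {"qemu-img", "resize", dir + "/disk.qcow2", "+20000M"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v failed: %v\n%s", s, err, out)
            }
        }
    }

The relative `+20000M` form grows only the virtual size; since qcow2 allocates on write, the image stays small on disk until the guest uses it, which is why this step succeeds even on a host where the VM can never boot.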
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-224000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000: exit status 7 (71.038666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-224000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.86s)
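Every qemu2 failure in this run reduces to the same precondition: nothing is listening on /var/run/socket_vmnet (the SocketVMnetPath in the config dumps above), so socket_vmnet_client cannot hand QEMU a network file descriptor. A minimal Go sketch, not part of the test suite, that probes the socket the way the launch implicitly does:

    // probe_socket_vmnet.go — checks whether the socket_vmnet daemon is
    // accepting connections, the precondition these starts keep failing on.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Path matches SocketVMnetPath in the cluster config above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1) // same symptom as the "Connection refused" in the logs
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On a healthy host the daemon is started out of band (for Homebrew installs, typically something like `sudo brew services start socket_vmnet`, per the minikube qemu2 driver docs); this probe failing is enough to explain every GUEST_PROVISION exit in the report.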

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-310000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (31.468625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
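This failure is secondary damage: the cluster never started, so minikube never wrote a `default-k8s-diff-port-310000` context into the kubeconfig, and every kubectl call aborts with the `context ... does not exist` error above. A sketch of checking that directly with client-go (the kubeconfig path is the one from this run's environment; the k8s.io/client-go dependency is an assumption about available modules):

    // list_contexts.go — lists the contexts actually present in a kubeconfig.
    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19046-4812/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        for name := range cfg.Contexts {
            fmt.Println(name) // "default-k8s-diff-port-310000" never appears
        }
    }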

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-310000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-310000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-310000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.832666ms)

** stderr ** 
	error: context "default-k8s-diff-port-310000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-310000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (29.287833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-310000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
}
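The `(-want +got)` block above is go-cmp diff notation: every `-` line is an expected image that is missing because `image list` returned nothing from a VM that never booted. A minimal sketch of how such output is produced (abbreviated want list; illustrative only):

    // image_diff.go — reproduces the test's "(-want +got)" diff style with
    // github.com/google/go-cmp.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.30.1",
            "registry.k8s.io/pause:3.9",
        }
        var got []string // `image list` returned nothing: the VM never ran
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.30.1 images missing (-want +got):\n%s", diff)
        }
    }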
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (28.634375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-310000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-310000 --alsologtostderr -v=1: exit status 83 (41.942417ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-310000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-310000"

-- /stdout --
** stderr ** 
	I0610 03:46:37.464922    9364 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:37.465099    9364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:37.465102    9364 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:37.465104    9364 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:37.465245    9364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:37.465457    9364 out.go:298] Setting JSON to false
	I0610 03:46:37.465465    9364 mustload.go:65] Loading cluster: default-k8s-diff-port-310000
	I0610 03:46:37.465658    9364 config.go:182] Loaded profile config "default-k8s-diff-port-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:37.470537    9364 out.go:177] * The control-plane node default-k8s-diff-port-310000 host is not running: state=Stopped
	I0610 03:46:37.474587    9364 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-310000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-310000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (29.528917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (29.072542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
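The post-mortem helpers pass `--format={{.Host}}` to `minikube status`; that format string is a standard Go text/template rendered against minikube's status value, which is why the command prints the bare word `Stopped`. A sketch with an illustrative struct (not minikube's actual type):

    // status_template.go — how a "{{.Host}}" format string renders one field.
    package main

    import (
        "os"
        "text/template"
    )

    type Status struct { // illustrative; the real status has more fields
        Host    string
        Kubelet string
    }

    func main() {
        t := template.Must(template.New("status").Parse("{{.Host}}"))
        _ = t.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"}) // prints "Stopped"
    }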

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-224000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-224000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1: exit status 80 (5.179324666s)

-- stdout --
	* [newest-cni-224000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-224000" primary control-plane node in "newest-cni-224000" cluster
	* Restarting existing qemu2 VM for "newest-cni-224000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-224000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 03:46:47.021678    9411 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:47.021813    9411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:47.021815    9411 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:47.021818    9411 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:47.021963    9411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:47.022985    9411 out.go:298] Setting JSON to false
	I0610 03:46:47.039318    9411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6378,"bootTime":1718010029,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:46:47.039381    9411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:46:47.043921    9411 out.go:177] * [newest-cni-224000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:46:47.049836    9411 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:46:47.049881    9411 notify.go:220] Checking for updates...
	I0610 03:46:47.053801    9411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:46:47.057792    9411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:46:47.060819    9411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:46:47.063810    9411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:46:47.066783    9411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:46:47.070140    9411 config.go:182] Loaded profile config "newest-cni-224000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:47.070420    9411 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:46:47.074813    9411 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:46:47.081759    9411 start.go:297] selected driver: qemu2
	I0610 03:46:47.081765    9411 start.go:901] validating driver "qemu2" against &{Name:newest-cni-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:47.081832    9411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:46:47.084214    9411 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 03:46:47.084255    9411 cni.go:84] Creating CNI manager for ""
	I0610 03:46:47.084263    9411 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:46:47.084304    9411 start.go:340] cluster config:
	{Name:newest-cni-224000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-224000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:46:47.088655    9411 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:46:47.095766    9411 out.go:177] * Starting "newest-cni-224000" primary control-plane node in "newest-cni-224000" cluster
	I0610 03:46:47.099820    9411 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:46:47.099832    9411 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:46:47.099846    9411 cache.go:56] Caching tarball of preloaded images
	I0610 03:46:47.099896    9411 preload.go:173] Found /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 03:46:47.099901    9411 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:46:47.099953    9411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/newest-cni-224000/config.json ...
	I0610 03:46:47.100450    9411 start.go:360] acquireMachinesLock for newest-cni-224000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:47.100479    9411 start.go:364] duration metric: took 22.959µs to acquireMachinesLock for "newest-cni-224000"
	I0610 03:46:47.100487    9411 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:47.100493    9411 fix.go:54] fixHost starting: 
	I0610 03:46:47.100615    9411 fix.go:112] recreateIfNeeded on newest-cni-224000: state=Stopped err=<nil>
	W0610 03:46:47.100623    9411 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:47.104775    9411 out.go:177] * Restarting existing qemu2 VM for "newest-cni-224000" ...
	I0610 03:46:47.112664    9411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c3:37:64:8c:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:47.114574    9411 main.go:141] libmachine: STDOUT: 
	I0610 03:46:47.114594    9411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:47.114623    9411 fix.go:56] duration metric: took 14.128542ms for fixHost
	I0610 03:46:47.114628    9411 start.go:83] releasing machines lock for "newest-cni-224000", held for 14.143833ms
	W0610 03:46:47.114635    9411 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:47.114672    9411 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:47.114677    9411 start.go:728] Will try again in 5 seconds ...
	I0610 03:46:52.117037    9411 start.go:360] acquireMachinesLock for newest-cni-224000: {Name:mke7087c0f34421ddb9d489cbefdaf460f28d311 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 03:46:52.117449    9411 start.go:364] duration metric: took 308µs to acquireMachinesLock for "newest-cni-224000"
	I0610 03:46:52.117608    9411 start.go:96] Skipping create...Using existing machine configuration
	I0610 03:46:52.117631    9411 fix.go:54] fixHost starting: 
	I0610 03:46:52.118366    9411 fix.go:112] recreateIfNeeded on newest-cni-224000: state=Stopped err=<nil>
	W0610 03:46:52.118394    9411 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 03:46:52.122736    9411 out.go:177] * Restarting existing qemu2 VM for "newest-cni-224000" ...
	I0610 03:46:52.131080    9411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c3:37:64:8c:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19046-4812/.minikube/machines/newest-cni-224000/disk.qcow2
	I0610 03:46:52.140957    9411 main.go:141] libmachine: STDOUT: 
	I0610 03:46:52.141017    9411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 03:46:52.141120    9411 fix.go:56] duration metric: took 23.4925ms for fixHost
	I0610 03:46:52.141133    9411 start.go:83] releasing machines lock for "newest-cni-224000", held for 23.663167ms
	W0610 03:46:52.141314    9411 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-224000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 03:46:52.148868    9411 out.go:177] 
	W0610 03:46:52.150206    9411 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 03:46:52.150239    9411 out.go:239] * 
	W0610 03:46:52.152574    9411 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:46:52.159878    9411 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-224000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000: exit status 7 (68.385458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-224000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
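SecondStart fails with the same shape as FirstStart: one failed host start, a fixed five-second back-off ("Will try again in 5 seconds"), one retry, then the GUEST_PROVISION exit. A condensed Go sketch of that control flow (`startHost` is a stand-in, not minikube's real function):

    // retry_start.go — the two-attempt pattern visible in the logs above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost() error { // always fails the way this run does
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }

Since the retry re-runs the same launch against the same dead socket, the second attempt can never succeed here, and the ~5s test durations in this report are almost entirely that sleep.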

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-224000 image list --format=json
start_stop_delete_test.go:304: v1.30.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.1",
- 	"registry.k8s.io/kube-controller-manager:v1.30.1",
- 	"registry.k8s.io/kube-proxy:v1.30.1",
- 	"registry.k8s.io/kube-scheduler:v1.30.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000: exit status 7 (30.53525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-224000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-224000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-224000 --alsologtostderr -v=1: exit status 83 (41.97ms)

-- stdout --
	* The control-plane node newest-cni-224000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-224000"

                                                
** stderr ** 
	I0610 03:46:52.345007    9428 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:46:52.345172    9428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:52.345175    9428 out.go:304] Setting ErrFile to fd 2...
	I0610 03:46:52.345177    9428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:46:52.345315    9428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:46:52.345569    9428 out.go:298] Setting JSON to false
	I0610 03:46:52.345575    9428 mustload.go:65] Loading cluster: newest-cni-224000
	I0610 03:46:52.345769    9428 config.go:182] Loaded profile config "newest-cni-224000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:46:52.349785    9428 out.go:177] * The control-plane node newest-cni-224000 host is not running: state=Stopped
	I0610 03:46:52.353814    9428 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-224000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-224000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000: exit status 7 (29.334542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-224000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000: exit status 7 (30.118625ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-224000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.30.1/json-events 13.2
13 TestDownloadOnly/v1.30.1/preload-exists 0
16 TestDownloadOnly/v1.30.1/kubectl 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.08
18 TestDownloadOnly/v1.30.1/DeleteAll 0.23
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.23
21 TestBinaryMirror 0.32
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
35 TestHyperKitDriverInstallOrUpdate 9.41
39 TestErrorSpam/start 0.38
40 TestErrorSpam/status 0.09
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 10.27
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
55 TestFunctional/serial/CacheCmd/cache/add_local 1.19
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.03
60 TestFunctional/serial/CacheCmd/cache/delete 0.07
69 TestFunctional/parallel/ConfigCmd 0.23
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.11
78 TestFunctional/parallel/AddonsCmd 0.12
93 TestFunctional/parallel/License 0.63
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 2.98
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
126 TestFunctional/parallel/ProfileCmd/profile_list 0.1
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.1
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
135 TestFunctional/delete_addon-resizer_images 0.12
136 TestFunctional/delete_my-image_image 0.04
137 TestFunctional/delete_minikube_cached_images 0.04
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.55
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.32
193 TestMainNoArgs 0.03
240 TestStoppedBinaryUpgrade/Setup 2.39
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
257 TestNoKubernetes/serial/ProfileList 31.43
258 TestNoKubernetes/serial/Stop 2
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
275 TestStartStop/group/old-k8s-version/serial/Stop 1.88
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
286 TestStartStop/group/no-preload/serial/Stop 3.09
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
299 TestStartStop/group/embed-certs/serial/Stop 3.43
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.99
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 2.16
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-625000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-625000: exit status 85 (90.707542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |          |
	|         | -p download-only-625000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 03:19:39
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 03:19:39.444787    5689 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:19:39.444950    5689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:19:39.444954    5689 out.go:304] Setting ErrFile to fd 2...
	I0610 03:19:39.444956    5689 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:19:39.445105    5689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	W0610 03:19:39.445195    5689 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19046-4812/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19046-4812/.minikube/config/config.json: no such file or directory
	I0610 03:19:39.446446    5689 out.go:298] Setting JSON to true
	I0610 03:19:39.464250    5689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4750,"bootTime":1718010029,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:19:39.464337    5689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:19:39.482816    5689 out.go:97] [download-only-625000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:19:39.485738    5689 out.go:169] MINIKUBE_LOCATION=19046
	I0610 03:19:39.482936    5689 notify.go:220] Checking for updates...
	W0610 03:19:39.482955    5689 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 03:19:39.517879    5689 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:19:39.521723    5689 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:19:39.524794    5689 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:19:39.529713    5689 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	W0610 03:19:39.535721    5689 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 03:19:39.535978    5689 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:19:39.537402    5689 out.go:97] Using the qemu2 driver based on user configuration
	I0610 03:19:39.537427    5689 start.go:297] selected driver: qemu2
	I0610 03:19:39.537441    5689 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:19:39.537534    5689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:19:39.541728    5689 out.go:169] Automatically selected the socket_vmnet network
	I0610 03:19:39.548315    5689 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 03:19:39.548423    5689 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 03:19:39.548486    5689 cni.go:84] Creating CNI manager for ""
	I0610 03:19:39.548503    5689 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0610 03:19:39.548556    5689 start.go:340] cluster config:
	{Name:download-only-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-625000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:19:39.553977    5689 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:19:39.558746    5689 out.go:97] Downloading VM boot image ...
	I0610 03:19:39.558779    5689 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/iso/arm64/minikube-v1.33.1-1717668912-19038-arm64.iso
	I0610 03:19:47.502943    5689 out.go:97] Starting "download-only-625000" primary control-plane node in "download-only-625000" cluster
	I0610 03:19:47.502967    5689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:19:47.624198    5689 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 03:19:47.624231    5689 cache.go:56] Caching tarball of preloaded images
	I0610 03:19:47.624464    5689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:19:47.628756    5689 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0610 03:19:47.628769    5689 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:19:47.851975    5689 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0610 03:19:57.979424    5689 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:19:57.979586    5689 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:19:58.674040    5689 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0610 03:19:58.674261    5689 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/download-only-625000/config.json ...
	I0610 03:19:58.674279    5689 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/download-only-625000/config.json: {Name:mk679a6fd591b26c67bafaaf1438ab6da55259f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:19:58.675322    5689 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0610 03:19:58.675518    5689 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0610 03:19:59.026578    5689 out.go:169] 
	W0610 03:19:59.031486    5689 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900 0x108ced900] Decompressors:map[bz2:0x1400000fd70 gz:0x1400000fd78 tar:0x1400000fcc0 tar.bz2:0x1400000fce0 tar.gz:0x1400000fd20 tar.xz:0x1400000fd30 tar.zst:0x1400000fd60 tbz2:0x1400000fce0 tgz:0x1400000fd20 txz:0x1400000fd30 tzst:0x1400000fd60 xz:0x1400000fd80 zip:0x1400000fdb0 zst:0x1400000fd88] Getters:map[file:0x140016c45c0 http:0x140005a2230 https:0x140005a2280] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 03:19:59.031505    5689 out_reason.go:110] 
	W0610 03:19:59.039508    5689 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 03:19:59.042490    5689 out.go:169] 
	
	
	* The control-plane node download-only-625000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-625000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
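
The root cause buried in the audit log above is the kubectl cache step: the download is rejected not because the binary fetch itself failed, but because the "?checksum=file:...kubectl.sha256" sidecar URL returned 404 (the v1.20.0 release apparently predates published darwin/arm64 kubectl binaries). The following is a minimal, self-contained Go sketch of that download-then-verify pattern; it is illustrative only, not minikube's actual download code.

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetch fails on any non-200 response, which is exactly the failure
	// mode above: "bad response code: 404" for the .sha256 sidecar.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		binURL := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"

		// The checksum file is required up front; a 404 here aborts the
		// whole cache step before the binary itself is even attempted.
		sum, err := fetch(binURL + ".sha256")
		if err != nil {
			fmt.Println("Failed to cache kubectl:", err)
			return
		}
		bin, err := fetch(binURL)
		if err != nil {
			fmt.Println("Failed to cache kubectl:", err)
			return
		}
		digest := sha256.Sum256(bin)
		if hex.EncodeToString(digest[:]) != strings.Fields(string(sum))[0] {
			fmt.Println("invalid checksum")
			return
		}
		fmt.Println("kubectl cached and verified")
	}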

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-625000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.30.1/json-events (13.2s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-069000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-069000 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=qemu2 : (13.199749541s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (13.20s)

TestDownloadOnly/v1.30.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
--- PASS: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-069000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-069000: exit status 85 (77.932833ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
	|         | -p download-only-625000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
	| delete  | -p download-only-625000        | download-only-625000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT | 10 Jun 24 03:19 PDT |
	| start   | -o=json --download-only        | download-only-069000 | jenkins | v1.33.1 | 10 Jun 24 03:19 PDT |                     |
	|         | -p download-only-069000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 03:19:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.3 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 03:19:59.691110    5726 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:19:59.691471    5726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:19:59.691476    5726 out.go:304] Setting ErrFile to fd 2...
	I0610 03:19:59.691478    5726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:19:59.691660    5726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:19:59.693005    5726 out.go:298] Setting JSON to true
	I0610 03:19:59.709312    5726 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4770,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:19:59.709371    5726 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:19:59.714116    5726 out.go:97] [download-only-069000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:19:59.718065    5726 out.go:169] MINIKUBE_LOCATION=19046
	I0610 03:19:59.714224    5726 notify.go:220] Checking for updates...
	I0610 03:19:59.724070    5726 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:19:59.727101    5726 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:19:59.730190    5726 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:19:59.733094    5726 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	W0610 03:19:59.739110    5726 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 03:19:59.739310    5726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:19:59.742097    5726 out.go:97] Using the qemu2 driver based on user configuration
	I0610 03:19:59.742107    5726 start.go:297] selected driver: qemu2
	I0610 03:19:59.742111    5726 start.go:901] validating driver "qemu2" against <nil>
	I0610 03:19:59.742159    5726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 03:19:59.745074    5726 out.go:169] Automatically selected the socket_vmnet network
	I0610 03:19:59.749898    5726 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 03:19:59.749998    5726 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 03:19:59.750031    5726 cni.go:84] Creating CNI manager for ""
	I0610 03:19:59.750038    5726 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 03:19:59.750043    5726 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 03:19:59.750077    5726 start.go:340] cluster config:
	{Name:download-only-069000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:19:59.754193    5726 iso.go:125] acquiring lock: {Name:mkd6b7fd6345aa4ff4dfd5c8a3a6e5c1bbeb9474 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 03:19:59.757110    5726 out.go:97] Starting "download-only-069000" primary control-plane node in "download-only-069000" cluster
	I0610 03:19:59.757116    5726 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:19:59.973749    5726 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:19:59.973879    5726 cache.go:56] Caching tarball of preloaded images
	I0610 03:19:59.974741    5726 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:19:59.979741    5726 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0610 03:19:59.979767    5726 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:20:00.189790    5726 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0610 03:20:07.999153    5726 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:20:07.999319    5726 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0610 03:20:08.541901    5726 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0610 03:20:08.542105    5726 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/download-only-069000/config.json ...
	I0610 03:20:08.542121    5726 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19046-4812/.minikube/profiles/download-only-069000/config.json: {Name:mke9082e1777643d60d2937a222c34969a5eedb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 03:20:08.542368    5726 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0610 03:20:08.542489    5726 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19046-4812/.minikube/cache/darwin/arm64/v1.30.1/kubectl
	
	
	* The control-plane node download-only-069000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-069000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.08s)

TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.23s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-069000
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.32s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-699000 --alsologtostderr --binary-mirror http://127.0.0.1:50868 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-699000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-699000
--- PASS: TestBinaryMirror (0.32s)
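
TestBinaryMirror redirects binary downloads to a local HTTP endpoint via --binary-mirror. Below is a sketch of the kind of stub server such a test can stand up; the path layout mirrored here is an assumption for illustration, not taken from the test's own helper.

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// With --binary-mirror http://127.0.0.1:50868, release artifacts
		// are requested from this process instead of dl.k8s.io, e.g.
		// /v1.30.1/bin/darwin/arm64/kubectl (assumed layout).
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintf(w, "stub artifact for %s\n", r.URL.Path)
		})
		log.Fatal(http.ListenAndServe("127.0.0.1:50868", nil))
	}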

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-028000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-028000: exit status 85 (57.292833ms)

-- stdout --
	* Profile "addons-028000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-028000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-028000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-028000: exit status 85 (53.44325ms)

-- stdout --
	* Profile "addons-028000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-028000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestHyperKitDriverInstallOrUpdate (9.41s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.41s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status: exit status 7 (31.537834ms)

-- stdout --
	nospam-302000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status: exit status 7 (29.317041ms)

-- stdout --
	nospam-302000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status: exit status 7 (30.015708ms)

-- stdout --
	nospam-302000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.09s)
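
All three status invocations above exit with status 7, and the test still passes: the harness asserts on the exit code it reads back rather than on command success. A self-contained Go sketch of that drive-the-binary pattern follows; the invocation shape mirrors the runs above, but the helper layout is assumed, not the real error_spam_test.go.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same shape as the runs above; "status" against a stopped
		// profile prints component states and exits non-zero.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"-p", "nospam-302000", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A stopped host yields exit status 7, as logged above.
			fmt.Printf("failed: exit status %d\n", exitErr.ExitCode())
		}
	}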

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause: exit status 83 (40.309917ms)

-- stdout --
	* The control-plane node nospam-302000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-302000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause: exit status 83 (44.949416ms)

-- stdout --
	* The control-plane node nospam-302000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-302000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause: exit status 83 (39.719041ms)

-- stdout --
	* The control-plane node nospam-302000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-302000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause: exit status 83 (40.603041ms)

-- stdout --
	* The control-plane node nospam-302000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-302000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause: exit status 83 (39.886708ms)

-- stdout --
	* The control-plane node nospam-302000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-302000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause: exit status 83 (39.820791ms)

-- stdout --
	* The control-plane node nospam-302000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-302000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (10.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 stop: (3.943852708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 stop: (3.235860125s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-302000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-302000 stop: (3.086468s)
--- PASS: TestErrorSpam/stop (10.27s)
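
The (3.94s), (3.23s), and (3.08s) annotations above come from timing each stop invocation individually. A tiny Go sketch of that measurement loop; timedRun is a hypothetical helper for illustration, not the harness's real Run wrapper.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// timedRun mirrors the "(dbg) Done: ... (3.943852708s)" lines above:
	// run one command to completion and report how long it took.
	func timedRun(name string, args ...string) (time.Duration, error) {
		start := time.Now()
		err := exec.Command(name, args...).Run()
		return time.Since(start), err
	}

	func main() {
		for i := 1; i <= 3; i++ {
			d, err := timedRun("out/minikube-darwin-arm64", "-p", "nospam-302000", "stop")
			fmt.Printf("stop run %d took %v (err=%v)\n", i, d, err)
		}
	}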

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19046-4812/.minikube/files/etc/test/nested/copy/5687/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-878000 cache add registry.k8s.io/pause:3.1: (1.122454375s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-878000 cache add registry.k8s.io/pause:3.3: (1.096440542s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3257078516/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cache add minikube-local-cache-test:functional-878000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 cache delete minikube-local-cache-test:functional-878000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-878000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 config get cpus: exit status 14 (29.95475ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 config get cpus: exit status 14 (38.091292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
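
The two exit-14 results above are the behavior under test: "config get" must fail once the key has been unset. Here is a toy Go sketch of that set/get/unset contract; the in-memory map stands in for minikube's on-disk config and is purely illustrative.

	package main

	import (
		"fmt"
		"os"
	)

	// config stands in for the persisted config file; a map is enough
	// to illustrate the contract exercised above.
	var config = map[string]string{}

	func get(key string) {
		if v, ok := config[key]; ok {
			fmt.Println(v)
			return
		}
		// Matches the stderr text and exit code 14 in the runs above.
		fmt.Fprintln(os.Stderr, "Error: specified key could not be found in config")
		os.Exit(14)
	}

	func main() {
		config["cpus"] = "2"   // config set cpus 2
		get("cpus")            // prints 2
		delete(config, "cpus") // config unset cpus
		get("cpus")            // exits with status 14
	}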

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-878000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-878000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (156.80025ms)

-- stdout --
	* [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0610 03:22:01.517007    6341 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:22:01.517196    6341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:01.517200    6341 out.go:304] Setting ErrFile to fd 2...
	I0610 03:22:01.517203    6341 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:01.517403    6341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:22:01.518692    6341 out.go:298] Setting JSON to false
	I0610 03:22:01.538423    6341 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4892,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:22:01.538490    6341 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:22:01.542718    6341 out.go:177] * [functional-878000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	I0610 03:22:01.550720    6341 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:22:01.553611    6341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:22:01.550773    6341 notify.go:220] Checking for updates...
	I0610 03:22:01.556655    6341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:22:01.559644    6341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:22:01.562585    6341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:22:01.565609    6341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:22:01.569000    6341 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:22:01.569284    6341 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:22:01.573624    6341 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 03:22:01.580688    6341 start.go:297] selected driver: qemu2
	I0610 03:22:01.580694    6341 start.go:901] validating driver "qemu2" against &{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:22:01.580744    6341 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:22:01.587594    6341 out.go:177] 
	W0610 03:22:01.591585    6341 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 03:22:01.594520    6341 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-878000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)
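
The dry run above exits 23 before any VM work happens: start-flag validation compares the requested 250MB against the 1800MB usable minimum quoted in the error. A minimal Go sketch of that pre-flight check; the constant and function names are illustrative, not minikube's internals.

	package main

	import "fmt"

	// minUsableMB is the floor quoted in the error above; the real value
	// lives in minikube's start-flag validation.
	const minUsableMB = 1800

	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf(
				"requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		// --memory 250MB, as in the dry-run invocation above.
		if err := validateRequestedMemory(250); err != nil {
			fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		}
	}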

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-878000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-878000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.989375ms)

-- stdout --
	* [functional-878000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0610 03:22:01.740204    6352 out.go:291] Setting OutFile to fd 1 ...
	I0610 03:22:01.740309    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:01.740313    6352 out.go:304] Setting ErrFile to fd 2...
	I0610 03:22:01.740315    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 03:22:01.740452    6352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19046-4812/.minikube/bin
	I0610 03:22:01.741810    6352 out.go:298] Setting JSON to false
	I0610 03:22:01.758575    6352 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4892,"bootTime":1718010029,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0610 03:22:01.758672    6352 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0610 03:22:01.763637    6352 out.go:177] * [functional-878000] minikube v1.33.1 sur Darwin 14.4.1 (arm64)
	I0610 03:22:01.768584    6352 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 03:22:01.772673    6352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	I0610 03:22:01.768638    6352 notify.go:220] Checking for updates...
	I0610 03:22:01.778605    6352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 03:22:01.781647    6352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 03:22:01.782928    6352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	I0610 03:22:01.785620    6352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 03:22:01.789004    6352 config.go:182] Loaded profile config "functional-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0610 03:22:01.789278    6352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 03:22:01.793438    6352 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0610 03:22:01.800667    6352 start.go:297] selected driver: qemu2
	I0610 03:22:01.800673    6352 start.go:901] validating driver "qemu2" against &{Name:functional-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 03:22:01.800735    6352 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 03:22:01.807599    6352 out.go:177] 
	W0610 03:22:01.811628    6352 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 03:22:01.815656    6352 out.go:177] 
** /stderr **
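Note: the French output above is the same dry-run failure localized by the caller's locale. A sketch of reproducing it by hand (using LC_ALL to select the locale is an assumption about how the harness drives it):

$ LC_ALL=fr out/minikube-darwin-arm64 start -p functional-878000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2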
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (0.63s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (2.98s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.946073334s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-878000
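Note: the pull/tag pair above only stages the image on the host's Docker daemon; the ImageCommands subtests then move it in and out of the cluster runtime. A sketch of the host-to-node direction (same command family the later subtests exercise):

$ out/minikube-darwin-arm64 -p functional-878000 image load gcr.io/google-containers/addon-resizer:functional-878000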
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.98s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image rm gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-878000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 image save --daemon gcr.io/google-containers/addon-resizer:functional-878000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-878000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "70.80075ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.036333ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "70.818792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.689709ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.10s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013316041s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
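Note: dscacheutil resolves through the macOS directory-services cache, so the ten-second latency above went through that cache rather than straight to the tunnel's resolver. If a later run hangs here, the standard cache reset is a hedged first step (stock macOS commands, not part of the harness):

$ sudo dscacheutil -flushcache
$ sudo killall -HUP mDNSResponder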
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-878000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-878000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-878000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-878000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.55s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-997000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-997000 --output=json --user=testUser: (3.551493125s)
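Note: the --user=testUser value recorded by this stop is what the Audit subtests below assert on. One hedged way to eyeball the audit trail on this host (the logs/audit.json location under MINIKUBE_HOME is an assumption about the default layout):

$ tail -n 5 /Users/jenkins/minikube-integration/19046-4812/.minikube/logs/audit.json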
--- PASS: TestJSONOutput/stop/Command (3.55s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-480000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-480000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.891417ms)
-- stdout --
	{"specversion":"1.0","id":"77f1f7fc-b5eb-4c3e-92aa-03867b15f575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-480000] minikube v1.33.1 on Darwin 14.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d02a2924-6479-4a2c-8ce4-e2e2b8f8568a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19046"}}
	{"specversion":"1.0","id":"c8b56910-81b1-44c3-b1d8-c29595e36197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig"}}
	{"specversion":"1.0","id":"bf20d59d-366b-4269-b1f2-a8c3894ae78c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2bdf7764-6cac-4ee7-8a84-526c122d2d30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e443591-acb8-48a6-9b25-932250012c5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube"}}
	{"specversion":"1.0","id":"0bf8d63e-ffcc-48e7-ba9e-270a10a1272a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2841919-8c94-4d1d-8016-15826644e169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
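Note: every stdout line above is a CloudEvents envelope, so the stream is machine-filterable by event type. A sketch with jq (jq is an illustration here, not part of the harness):

$ out/minikube-darwin-arm64 start -p json-output-error-480000 --output=json --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'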
helpers_test.go:175: Cleaning up "json-output-error-480000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-480000
--- PASS: TestErrorJSONOutput (0.32s)

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (2.39s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.39s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (96.638292ms)
-- stdout --
	* [NoKubernetes-168000] minikube v1.33.1 on Darwin 14.4.1 (arm64)
	  - MINIKUBE_LOCATION=19046
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19046-4812/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19046-4812/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
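Note: exit 14 above is the intended usage check; --no-kubernetes and --kubernetes-version are mutually exclusive, as the stderr hint says. The accepted form, which the rest of this group uses:

$ out/minikube-darwin-arm64 start -p NoKubernetes-168000 --no-kubernetes --driver=qemu2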
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-168000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-168000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (39.073875ms)
-- stdout --
	* The control-plane node NoKubernetes-168000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-168000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.43s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.697395583s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.7370485s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.43s)

TestNoKubernetes/serial/Stop (2s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-168000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-168000: (1.999447208s)
--- PASS: TestNoKubernetes/serial/Stop (2.00s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-168000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-168000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.645834ms)
-- stdout --
	* The control-plane node NoKubernetes-168000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-168000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-390000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestStartStop/group/old-k8s-version/serial/Stop (1.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-407000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-407000 --alsologtostderr -v=3: (1.880446125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.88s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-407000 -n old-k8s-version-407000: exit status 7 (35.968875ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
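Note: a non-zero exit from `status` is expected for a stopped host, which is why the harness treats exit status 7 as "may be ok" rather than a failure. The same Go-template mechanism can pull several status fields at once; a sketch (field names beyond .Host are assumptions drawn from minikube's status output, not from this log):

$ out/minikube-darwin-arm64 status -p old-k8s-version-407000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'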
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-407000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/no-preload/serial/Stop (3.09s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-394000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-394000 --alsologtostderr -v=3: (3.087526458s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-394000 -n no-preload-394000: exit status 7 (53.945416ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-394000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.43s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-543000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-543000 --alsologtostderr -v=3: (3.434163083s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.43s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-543000 -n embed-certs-543000: exit status 7 (54.09475ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-543000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.99s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-310000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-310000 --alsologtostderr -v=3: (1.989727375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.99s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-310000 -n default-k8s-diff-port-310000: exit status 7 (56.536833ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-310000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-224000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.16s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-224000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-224000 --alsologtostderr -v=3: (2.158481375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.16s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-224000 -n newest-cni-224000: exit status 7 (57.026041ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-224000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.33s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1426907304/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718014883213047000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1426907304/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718014883213047000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1426907304/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718014883213047000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1426907304/001/test-1718014883213047000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (44.920291ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --

functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.638333ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --

functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.073167ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --

functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.997917ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --

functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.214292ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --

functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.664459ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --

functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.9315ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
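Note: one hedged workaround for that macOS prompt on a CI host is ad-hoc signing the freshly built binary before the mount tests run (codesign is the stock macOS tool; wiring it into this job is an assumption, not something the harness does):

$ codesign --force --sign - out/minikube-darwin-arm64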
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo umount -f /mount-9p": exit status 83 (45.974084ms)
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1426907304/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.33s)

TestFunctional/parallel/MountCmd/specific-port (12.78s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1591292243/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.910875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.044708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.267292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.397292ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.428583ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.83125ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.272042ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "sudo umount -f /mount-9p": exit status 83 (47.67475ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-878000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1591292243/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.78s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup841309161/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup841309161/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup841309161/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1: exit status 83 (76.943ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1: exit status 83 (81.908416ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1: exit status 83 (87.045458ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1: exit status 83 (81.817459ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1: exit status 83 (85.95925ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1: exit status 83 (85.7255ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-878000 ssh "findmnt -T" /mount1: exit status 83 (87.91425ms)

-- stdout --
	* The control-plane node functional-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-878000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup841309161/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup841309161/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-878000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup841309161/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.13s)
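
The skip reason recorded at functional_test_mount_test.go:251 and :340 is macOS's prompt for non-code-signed binaries listening on non-localhost ports. A hedged local workaround, assuming an ad-hoc signature is enough to satisfy that prompt, is to sign the freshly built binary before the mount daemons start:

    codesign --force --sign - out/minikube-darwin-arm64   # ad-hoc signature; a one-time firewall approval may still appear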

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
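
The gate here is the harness's own --gvisor flag (default false), as the skip message shows. To exercise the test locally the flag has to reach the test binary; one plausible invocation, assuming minikube's test/integration layout and using go test's -args passthrough, is:

    go test ./test/integration -run TestGvisorAddon -args --gvisor=true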

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-811000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-811000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-811000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/hosts:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/resolv.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-811000

>>> host: crictl pods:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crictl containers:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: describe netcat deployment:
error: context "cilium-811000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-811000" does not exist

>>> k8s: netcat logs:
error: context "cilium-811000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-811000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-811000" does not exist

>>> k8s: coredns logs:
error: context "cilium-811000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-811000" does not exist

>>> k8s: api server logs:
error: context "cilium-811000" does not exist

>>> host: /etc/cni:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: ip a s:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: ip r s:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: iptables-save:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: iptables table nat:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-811000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-811000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-811000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-811000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-811000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-811000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-811000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-811000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: kubelet daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: kubelet logs:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-811000

>>> host: docker daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: docker daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: docker system info:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: cri-docker daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: cri-docker daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: cri-dockerd version:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: containerd daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: containerd daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: containerd config dump:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crio daemon status:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crio daemon config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: /etc/crio:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

>>> host: crio config:
* Profile "cilium-811000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811000"

----------------------- debugLogs end: cilium-811000 [took: 2.165116417s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-811000
--- SKIP: TestNetworkPlugins/group/cilium (2.39s)
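
Every probe in the debugLogs dump above fails the same way: the cilium-811000 profile was never started, so no kubeconfig context exists (the kubectl config dump near the end shows clusters/contexts/users all null). Two standard checks that would confirm that state, outside the harness:

    kubectl config get-contexts              # expect no cilium-811000 entry
    out/minikube-darwin-arm64 profile list   # the command each "host:" probe hints at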

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-059000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-059000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
